Test Report: Docker_Linux_crio 19651

f000a69778791892f7d89fef6358d7150d12a198:2024-09-16:36236

Failed tests (55/306)

Order  Failed test  Duration (s)
31 TestAddons/serial/GCPAuth/Namespaces 0
33 TestAddons/parallel/Registry 12.89
34 TestAddons/parallel/Ingress 2
36 TestAddons/parallel/MetricsServer 323.56
37 TestAddons/parallel/HelmTiller 82.64
39 TestAddons/parallel/CSI 362.01
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 25.33
68 TestFunctional/serial/KubeContext 2.35
69 TestFunctional/serial/KubectlGetPods 2.3
82 TestFunctional/serial/ComponentHealth 2.03
85 TestFunctional/serial/InvalidService 0
88 TestFunctional/parallel/DashboardCmd 4.36
95 TestFunctional/parallel/ServiceCmdConnect 2.5
97 TestFunctional/parallel/PersistentVolumeClaim 79.91
101 TestFunctional/parallel/MySQL 2.45
107 TestFunctional/parallel/NodeLabels 2.12
113 TestFunctional/parallel/ServiceCmd/DeployApp 0
114 TestFunctional/parallel/ServiceCmd/List 0.38
115 TestFunctional/parallel/MountCmd/any-port 2.59
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
120 TestFunctional/parallel/ServiceCmd/Format 0.34
121 TestFunctional/parallel/ServiceCmd/URL 0.39
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 114.55
162 TestMultiControlPlane/serial/NodeLabels 2.08
167 TestMultiControlPlane/serial/RestartSecondaryNode 23.77
170 TestMultiControlPlane/serial/DeleteSecondaryNode 13.85
173 TestMultiControlPlane/serial/RestartCluster 81.19
229 TestMultiNode/serial/MultiNodeLabels 2.25
233 TestMultiNode/serial/StartAfterStop 11.33
235 TestMultiNode/serial/DeleteNode 7.77
237 TestMultiNode/serial/RestartMultiNode 55.38
251 TestKubernetesUpgrade 316.58
295 TestNetworkPlugins/group/auto/NetCatPod 1800.33
299 TestNetworkPlugins/group/kindnet/NetCatPod 1800.31
304 TestNetworkPlugins/group/calico/NetCatPod 1800.32
306 TestNetworkPlugins/group/enable-default-cni/NetCatPod 1800.47
310 TestNetworkPlugins/group/flannel/NetCatPod 1800.31
313 TestNetworkPlugins/group/bridge/NetCatPod 1800.33
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 1800.31
319 TestStartStop/group/old-k8s-version/serial/DeployApp 3.7
320 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.71
323 TestStartStop/group/old-k8s-version/serial/SecondStart 377.13
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.96
330 TestStartStop/group/no-preload/serial/DeployApp 3.53
331 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.58
336 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.91
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 3.66
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.61
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 7.02
363 TestStartStop/group/embed-certs/serial/DeployApp 3.6
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.6
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.96
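
Two failure signatures dominate this table: dozens of tests that die within a few seconds (the kubectl `exec format error` detailed in the sections below) and seven NetCatPod tests that all stop at ~1800s, which looks like a single uniform wait timeout rather than seven independent faults. A throwaway sketch for sorting the rows by duration to make such clusters visible; `failures.txt` is a hypothetical file holding the list above:

    # Print "duration test-name", longest first, to surface the 1800s cluster.
    awk '$2 ~ /^Test/ {print $3, $2}' failures.txt | sort -gr | head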
TestAddons/serial/GCPAuth/Namespaces (0s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-821781 create ns new-namespace
addons_test.go:656: (dbg) Non-zero exit: kubectl --context addons-821781 create ns new-namespace: fork/exec /usr/local/bin/kubectl: exec format error (564.82µs)
addons_test.go:658: kubectl --context addons-821781 create ns new-namespace failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/serial/GCPAuth/Namespaces (0.00s)
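
Every kubectl invocation in this run dies at fork/exec with `exec format error`, which the kernel returns when the binary's format does not match the host: typically a build for the wrong architecture, or a truncated/non-ELF download, at /usr/local/bin/kubectl on this linux/amd64 agent. A minimal triage sketch, assuming shell access to the agent; only the kubectl path is taken from the log, the expected outputs are assumptions:

    # A healthy binary here should report "ELF 64-bit LSB executable, x86-64".
    # An ARM ELF, an HTML error page, or an empty file would all explain the
    # "exec format error" above.
    file /usr/local/bin/kubectl

    # Host architecture for comparison (x86_64 on ubuntu-20-agent-9).
    uname -m

    # If the binary executes at all, it can report its own build platform.
    /usr/local/bin/kubectl version --client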

TestAddons/parallel/Registry (12.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 8.831171ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003941141s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003363247s
addons_test.go:342: (dbg) Run:  kubectl --context addons-821781 delete po -l run=registry-test --now
addons_test.go:342: (dbg) Non-zero exit: kubectl --context addons-821781 delete po -l run=registry-test --now: fork/exec /usr/local/bin/kubectl: exec format error (480.353µs)
addons_test.go:344: pre-cleanup kubectl --context addons-821781 delete po -l run=registry-test --now failed: fork/exec /usr/local/bin/kubectl: exec format error (not a problem)
addons_test.go:347: (dbg) Run:  kubectl --context addons-821781 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": fork/exec /usr/local/bin/kubectl: exec format error (294.288µs)
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-821781 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got **
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 ip
2024/09/16 10:26:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 addons disable registry --alsologtostderr -v=1
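
Note that the registry probe never actually ran: both kubectl invocations failed at fork/exec before any HTTP request was made, so the "expected curl response" assertion compared against an empty string rather than a real registry reply. A hedged sketch for re-running the same checks by hand once a working kubectl is in place; the commands and the 192.168.49.2:5000 address are copied from the log above, everything else is illustrative:

    # In-cluster check, identical to the one the test attempts.
    kubectl --context addons-821781 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # From the host, via the node IP and registry port that the harness's
    # DEBUG GET line above also probes.
    curl -sI http://192.168.49.2:5000
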
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-821781
helpers_test.go:235: (dbg) docker inspect addons-821781:

-- stdout --
	[
	    {
	        "Id": "60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9",
	        "Created": "2024-09-16T10:23:34.422231958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:34.564816551Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hosts",
	        "LogPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9-json.log",
	        "Name": "/addons-821781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-821781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-821781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-821781",
	                "Source": "/var/lib/docker/volumes/addons-821781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-821781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-821781",
	                "name.minikube.sigs.k8s.io": "addons-821781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb89cb54fc4711f104a02c8d2ebaaa0dae68769e21054477c7dd719ee876c61d",
	            "SandboxKey": "/var/run/docker/netns/cb89cb54fc47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-821781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "66d8d4a2fe0f9ff012a57288f3992a27df27bc2a73eb33a40ff3adbc0fa270ea",
	                    "EndpointID": "54da588c62c62ca60fdaac7dbe299e76b7fad63e791a3bfc770a096d3640b2fb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-821781",
	                        "60dd933522c2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
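
The inspect output shows every container port bound to 127.0.0.1 with a dynamically assigned host port (32768-32772), so the harness has to resolve the mapping at runtime instead of hard-coding it. A small sketch of the same lookup; the Go template is the one the minikube cli_runner itself executes later in this log, while the `docker port` shorthand is an assumed equivalent query:

    # Shorthand: ask Docker for the host side of the SSH port mapping.
    docker port addons-821781 22/tcp

    # Template form, matching the cli_runner invocation in the Last Start log.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-821781
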
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-821781 -n addons-821781
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-821781 logs -n 25: (1.423180821s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-534059              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-920673              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-291625               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-291625            | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-597115                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44611               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-597115              | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | disable dashboard -p                 | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| start   | -p addons-821781 --wait=true         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:26 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| ip      | addons-821781 ip                     | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:11.785613   12642 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:11.786005   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786020   12642 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:11.786026   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786201   12642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:23:11.786846   12642 out.go:352] Setting JSON to false
	I0916 10:23:11.787652   12642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":332,"bootTime":1726481860,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:11.787744   12642 start.go:139] virtualization: kvm guest
	I0916 10:23:11.789971   12642 out.go:177] * [addons-821781] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:11.791581   12642 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:11.791602   12642 notify.go:220] Checking for updates...
	I0916 10:23:11.793279   12642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:11.794876   12642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:11.796234   12642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:23:11.797605   12642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:11.798881   12642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:11.800381   12642 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:11.822354   12642 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:11.822435   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.875294   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.865218731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.875392   12642 docker.go:318] overlay module found
	I0916 10:23:11.877179   12642 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:11.878539   12642 start.go:297] selected driver: docker
	I0916 10:23:11.878555   12642 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:11.878567   12642 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:11.879376   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.928080   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.918595521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.928248   12642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:11.928460   12642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:11.930314   12642 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:11.931824   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:11.931880   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:11.931896   12642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:11.931970   12642 start.go:340] cluster config:
	{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:11.933478   12642 out.go:177] * Starting "addons-821781" primary control-plane node in "addons-821781" cluster
	I0916 10:23:11.934979   12642 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:23:11.936645   12642 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:11.938033   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:11.938077   12642 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:23:11.938086   12642 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:11.938151   12642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:11.938181   12642 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:11.938195   12642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:23:11.938528   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:11.938559   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json: {Name:mkb2d65543ac9e0f1211fb3bb619eaf59705ab34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:11.954455   12642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:11.954550   12642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:11.954565   12642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:11.954570   12642 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:11.954578   12642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:11.954585   12642 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:24.468174   12642 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:24.468219   12642 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:24.468270   12642 start.go:360] acquireMachinesLock for addons-821781: {Name:mk2b69b21902e1a037d888f1a4c14b20c068c000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:24.468392   12642 start.go:364] duration metric: took 101µs to acquireMachinesLock for "addons-821781"
	I0916 10:23:24.468422   12642 start.go:93] Provisioning new machine with config: &{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:24.468511   12642 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:24.470800   12642 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:24.471033   12642 start.go:159] libmachine.API.Create for "addons-821781" (driver="docker")
	I0916 10:23:24.471057   12642 client.go:168] LocalClient.Create starting
	I0916 10:23:24.471161   12642 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:23:24.563569   12642 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:23:24.843226   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:24.859906   12642 cli_runner.go:211] docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:24.859982   12642 network_create.go:284] running [docker network inspect addons-821781] to gather additional debugging logs...
	I0916 10:23:24.860006   12642 cli_runner.go:164] Run: docker network inspect addons-821781
	W0916 10:23:24.875695   12642 cli_runner.go:211] docker network inspect addons-821781 returned with exit code 1
	I0916 10:23:24.875725   12642 network_create.go:287] error running [docker network inspect addons-821781]: docker network inspect addons-821781: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-821781 not found
	I0916 10:23:24.875736   12642 network_create.go:289] output of [docker network inspect addons-821781]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-821781 not found
	
	** /stderr **
	I0916 10:23:24.875825   12642 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:24.892396   12642 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019c5ea0}
	I0916 10:23:24.892450   12642 network_create.go:124] attempt to create docker network addons-821781 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:24.892494   12642 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-821781 addons-821781
	I0916 10:23:24.956362   12642 network_create.go:108] docker network addons-821781 192.168.49.0/24 created
	I0916 10:23:24.956397   12642 kic.go:121] calculated static IP "192.168.49.2" for the "addons-821781" container
	I0916 10:23:24.956461   12642 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:24.972596   12642 cli_runner.go:164] Run: docker volume create addons-821781 --label name.minikube.sigs.k8s.io=addons-821781 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:24.991422   12642 oci.go:103] Successfully created a docker volume addons-821781
	I0916 10:23:24.991492   12642 cli_runner.go:164] Run: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:29.942508   12642 cli_runner.go:217] Completed: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.950978249s)
	I0916 10:23:29.942530   12642 oci.go:107] Successfully prepared a docker volume addons-821781
	I0916 10:23:29.942541   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:29.942558   12642 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:29.942601   12642 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:34.358289   12642 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.415644078s)
	I0916 10:23:34.358318   12642 kic.go:203] duration metric: took 4.415757339s to extract preloaded images to volume ...
	W0916 10:23:34.358449   12642 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:34.358539   12642 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:34.407126   12642 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-821781 --name addons-821781 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-821781 --network addons-821781 --ip 192.168.49.2 --volume addons-821781:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:34.740907   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Running}}
	I0916 10:23:34.761456   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:34.779743   12642 cli_runner.go:164] Run: docker exec addons-821781 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:34.825817   12642 oci.go:144] the created container "addons-821781" has a running status.
	I0916 10:23:34.825843   12642 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa...
	I0916 10:23:35.044132   12642 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:35.071224   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.090107   12642 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:35.090127   12642 kic_runner.go:114] Args: [docker exec --privileged addons-821781 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:35.145473   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.163175   12642 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:35.163257   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.181284   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.181510   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.181525   12642 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:35.376812   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.376844   12642 ubuntu.go:169] provisioning hostname "addons-821781"
	I0916 10:23:35.376907   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.394400   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.394569   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.394582   12642 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-821781 && echo "addons-821781" | sudo tee /etc/hostname
	I0916 10:23:35.535760   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.535841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.554208   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.554394   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.554410   12642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-821781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-821781/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-821781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:35.685491   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:35.685520   12642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:23:35.685538   12642 ubuntu.go:177] setting up certificates
	I0916 10:23:35.685549   12642 provision.go:84] configureAuth start
	I0916 10:23:35.685599   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:35.701932   12642 provision.go:143] copyHostCerts
	I0916 10:23:35.702012   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:23:35.702151   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:23:35.702230   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:23:35.702295   12642 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.addons-821781 san=[127.0.0.1 192.168.49.2 addons-821781 localhost minikube]
	I0916 10:23:35.783034   12642 provision.go:177] copyRemoteCerts
	I0916 10:23:35.783097   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:35.783127   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.800161   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:35.893913   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:23:35.915296   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:23:35.937405   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:35.959050   12642 provision.go:87] duration metric: took 273.490922ms to configureAuth
	I0916 10:23:35.959082   12642 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:35.959246   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:35.959337   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.977055   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.977247   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.977264   12642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:23:36.194829   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:23:36.194851   12642 machine.go:96] duration metric: took 1.031655385s to provisionDockerMachine
	I0916 10:23:36.194860   12642 client.go:171] duration metric: took 11.723797841s to LocalClient.Create
	I0916 10:23:36.194875   12642 start.go:167] duration metric: took 11.723845183s to libmachine.API.Create "addons-821781"
	I0916 10:23:36.194883   12642 start.go:293] postStartSetup for "addons-821781" (driver="docker")
	I0916 10:23:36.194895   12642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:36.194953   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:36.194987   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.212136   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.306296   12642 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:36.309608   12642 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:36.309638   12642 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:36.309646   12642 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:36.309652   12642 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:36.309662   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:23:36.309721   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:23:36.309744   12642 start.go:296] duration metric: took 114.855265ms for postStartSetup
	I0916 10:23:36.310017   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.326531   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:36.326849   12642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:36.326901   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.343127   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.434151   12642 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:36.438063   12642 start.go:128] duration metric: took 11.969538805s to createHost
	I0916 10:23:36.438087   12642 start.go:83] releasing machines lock for "addons-821781", held for 11.96968194s
	I0916 10:23:36.438170   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.454099   12642 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:36.454144   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.454204   12642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:36.454276   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.472027   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.473599   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.640610   12642 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:36.644626   12642 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:23:36.780722   12642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:36.785109   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.802933   12642 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:36.803016   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.830084   12642 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:36.830106   12642 start.go:495] detecting cgroup driver to use...
	I0916 10:23:36.830135   12642 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:36.830178   12642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:23:36.843678   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:23:36.854207   12642 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:36.854255   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:36.867323   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:36.880430   12642 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:36.955777   12642 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:37.035979   12642 docker.go:233] disabling docker service ...
	I0916 10:23:37.036049   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:37.052780   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:37.063200   12642 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:37.138165   12642 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:37.215004   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:37.225051   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:37.239114   12642 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:23:37.239176   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.248375   12642 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:23:37.248431   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.257180   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.265957   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.274955   12642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:37.283271   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.291833   12642 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.305478   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.314242   12642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:37.321530   12642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
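
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch of the expected keys plus a verification grep (key placement follows the sed anchors; the surrounding context of the drop-in may differ):

    # Keys the edits above are expected to produce in 02-crio.conf:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    # Verify before the "systemctl restart crio" step below:
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
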
	I0916 10:23:37.328860   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.397743   12642 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:23:37.494696   12642 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:23:37.494784   12642 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:23:37.498069   12642 start.go:563] Will wait 60s for crictl version
	I0916 10:23:37.498121   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:23:37.501763   12642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:37.533845   12642 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:23:37.533971   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.568210   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.602768   12642 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:23:37.604266   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:37.620164   12642 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:37.623594   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
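
The { grep -v ...; echo ...; } > /tmp/h.$$ pattern above is an idempotent /etc/hosts update: it drops any stale line ending in the pinned name, appends the fresh mapping, and copies the file back in one step. A generalized sketch (the helper name is illustrative; the same idiom recurs below for control-plane.minikube.internal):

    # Pin NAME to IP in /etc/hosts, replacing any previous tab-separated entry.
    # Note: NAME is used as a regex tail here, which is fine for a sketch.
    update_hosts_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.168.49.1 host.minikube.internal
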
	I0916 10:23:37.633351   12642 kubeadm.go:883] updating cluster {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:37.633481   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:37.633537   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.691488   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.691513   12642 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:23:37.691557   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.721834   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.721855   12642 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:37.721863   12642 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:23:37.721943   12642 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-821781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:23:37.722004   12642 ssh_runner.go:195] Run: crio config
	I0916 10:23:37.761799   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:37.761826   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:37.761837   12642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:37.761858   12642 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-821781 NodeName:addons-821781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:37.761998   12642 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-821781"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
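
The generated kubeadm.yaml above still declares kubeadm.k8s.io/v1beta3, which is why the init output further down warns about a deprecated API spec. A hedged way to sanity-check the file with recent kubeadm releases (both subcommands exist in current kubeadm; migrate prints the converted config to stdout when --new-config is omitted):

    # Validate the config against the API versions it declares:
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # Preview the same config migrated off the deprecated v1beta3 API:
    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml
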
	I0916 10:23:37.762053   12642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:37.770243   12642 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:37.770305   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:37.778774   12642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:23:37.794482   12642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:37.810783   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0916 10:23:37.827097   12642 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:37.830351   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:37.840395   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.914798   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:37.926573   12642 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781 for IP: 192.168.49.2
	I0916 10:23:37.926602   12642 certs.go:194] generating shared ca certs ...
	I0916 10:23:37.926624   12642 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:37.926767   12642 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:23:38.165524   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt ...
	I0916 10:23:38.165552   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt: {Name:mk958b9d7b4e596cca12a43812b033701a1808ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165715   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key ...
	I0916 10:23:38.165727   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key: {Name:mk218c15b5e68b365653a5a88f283b4fd2a63397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165796   12642 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:23:38.317748   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt ...
	I0916 10:23:38.317782   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt: {Name:mke289e24f4d60c196cc49c14787f9db71cc62b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.317972   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key ...
	I0916 10:23:38.317984   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key: {Name:mk238a3132478eab5de811cbc3626e41ad1154f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.318059   12642 certs.go:256] generating profile certs ...
	I0916 10:23:38.318110   12642 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key
	I0916 10:23:38.318136   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt with IP's: []
	I0916 10:23:38.579861   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt ...
	I0916 10:23:38.579894   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: {Name:mk21e84efd5822ab69a95d39a845706a794c0061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580087   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key ...
	I0916 10:23:38.580102   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key: {Name:mkafbaeecfaf57db916f1469c60f36a7c0603c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580202   12642 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e
	I0916 10:23:38.580226   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:38.661523   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e ...
	I0916 10:23:38.661551   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e: {Name:mk3603fd200d1d0c9c664f1f9e2d3f37d0da819e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661721   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e ...
	I0916 10:23:38.661734   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e: {Name:mk979e39754dc7623208af4e4f8346a3268b5e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661802   12642 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt
	I0916 10:23:38.661872   12642 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key
	I0916 10:23:38.661916   12642 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key
	I0916 10:23:38.661934   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt with IP's: []
	I0916 10:23:38.868848   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt ...
	I0916 10:23:38.868882   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt: {Name:mk60143e6be001872095f4a07cc8800f3883cb9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869061   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key ...
	I0916 10:23:38.869072   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key: {Name:mkfcb902307b78d6d49e6123539922887bdc7bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869254   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:38.869291   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:23:38.869321   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:38.869365   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:23:38.869947   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:38.891875   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:38.913044   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:38.935301   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:38.957638   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:38.978769   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:38.999283   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:39.020509   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:39.041006   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:39.062022   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:39.077689   12642 ssh_runner.go:195] Run: openssl version
	I0916 10:23:39.082828   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:39.091794   12642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094851   12642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094909   12642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.101357   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
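
The b5213941.0 symlink above follows OpenSSL's subject-hash lookup convention: TLS libraries locate a CA in /etc/ssl/certs by the hash of its subject name, so the value printed by openssl x509 -hash must match the link name. To reproduce the check by hand:

    # Compute the subject hash and confirm the hash-named symlink resolves:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # expected target: /etc/ssl/certs/minikubeCA.pem
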
	I0916 10:23:39.110237   12642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:39.113275   12642 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:39.113343   12642 kubeadm.go:392] StartCluster: {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:39.113424   12642 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:39.113461   12642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:39.147213   12642 cri.go:89] found id: ""
	I0916 10:23:39.147277   12642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:39.155102   12642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:39.162655   12642 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:39.162713   12642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:39.170269   12642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:39.170287   12642 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:39.170331   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:39.177944   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:39.178006   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:39.185617   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:39.193448   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:39.193494   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:39.201778   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.209504   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:39.209560   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.217167   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:39.224794   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:39.224851   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:39.232091   12642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:39.267943   12642 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:39.268041   12642 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:39.285854   12642 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:39.285924   12642 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:39.285968   12642 kubeadm.go:310] OS: Linux
	I0916 10:23:39.286011   12642 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:39.286080   12642 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:39.286143   12642 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:39.286205   12642 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:39.286307   12642 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:39.286389   12642 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:39.286430   12642 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:39.286498   12642 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:39.286566   12642 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:39.334020   12642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:39.334137   12642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:39.334277   12642 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:39.339811   12642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:39.342965   12642 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:39.343081   12642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:39.343174   12642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:39.501471   12642 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:39.656891   12642 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:39.803369   12642 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:39.956554   12642 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:40.122217   12642 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:40.122346   12642 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.178788   12642 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:40.178946   12642 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.253274   12642 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:40.444072   12642 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:40.539814   12642 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:40.539908   12642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:40.740107   12642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:40.805609   12642 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:41.114974   12642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:41.183175   12642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:41.287722   12642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:41.288131   12642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:41.290675   12642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:41.293432   12642 out.go:235]   - Booting up control plane ...
	I0916 10:23:41.293554   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:41.293636   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:41.293726   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:41.302536   12642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:41.307914   12642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:41.307975   12642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:41.387469   12642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:41.387659   12642 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:41.889098   12642 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.704632ms
	I0916 10:23:41.889216   12642 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:46.391264   12642 kubeadm.go:310] [api-check] The API server is healthy after 4.502175176s
	I0916 10:23:46.402989   12642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:46.412298   12642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:46.429664   12642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:46.429953   12642 kubeadm.go:310] [mark-control-plane] Marking the node addons-821781 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:46.439045   12642 kubeadm.go:310] [bootstrap-token] Using token: 08e8kf.82j5psgo1mt86ygt
	I0916 10:23:46.440988   12642 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:46.441118   12642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:46.443591   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:46.448741   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:46.451033   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:46.453482   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:46.457052   12642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:46.798062   12642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:47.220263   12642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:47.797780   12642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:47.798623   12642 kubeadm.go:310] 
	I0916 10:23:47.798710   12642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:47.798722   12642 kubeadm.go:310] 
	I0916 10:23:47.798838   12642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:47.798858   12642 kubeadm.go:310] 
	I0916 10:23:47.798897   12642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:47.798955   12642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:47.799030   12642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:47.799050   12642 kubeadm.go:310] 
	I0916 10:23:47.799117   12642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:47.799125   12642 kubeadm.go:310] 
	I0916 10:23:47.799191   12642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:47.799202   12642 kubeadm.go:310] 
	I0916 10:23:47.799273   12642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:47.799371   12642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:47.799433   12642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:47.799458   12642 kubeadm.go:310] 
	I0916 10:23:47.799618   12642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:47.799702   12642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:47.799727   12642 kubeadm.go:310] 
	I0916 10:23:47.799855   12642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800005   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:23:47.800028   12642 kubeadm.go:310] 	--control-plane 
	I0916 10:23:47.800034   12642 kubeadm.go:310] 
	I0916 10:23:47.800137   12642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:47.800147   12642 kubeadm.go:310] 
	I0916 10:23:47.800244   12642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800384   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:23:47.802505   12642 kubeadm.go:310] W0916 10:23:39.265300    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.802965   12642 kubeadm.go:310] W0916 10:23:39.265967    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.803297   12642 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:47.803488   12642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:47.803508   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:47.803517   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:47.805594   12642 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:47.806930   12642 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:47.811723   12642 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:47.811744   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:47.829314   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:23:48.045373   12642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:48.045433   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.045434   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-821781 minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-821781 minikube.k8s.io/primary=true
	I0916 10:23:48.053143   12642 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:48.121750   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.622580   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.121829   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.622144   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.122640   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.622473   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.122549   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.622693   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.122279   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.622129   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.815735   12642 kubeadm.go:1113] duration metric: took 4.770357411s to wait for elevateKubeSystemPrivileges
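
The repeated "kubectl get sa default" calls above are a readiness poll: the default ServiceAccount only appears once the controller-manager's ServiceAccount controller has run, so minikube retries (here for roughly 4.8s) before granting kube-system privileges can be considered settled. The equivalent loop, sketched:

    # Wait until the "default" ServiceAccount exists in the default namespace
    # (it is created asynchronously by kube-controller-manager after init).
    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
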
	I0916 10:23:52.815769   12642 kubeadm.go:394] duration metric: took 13.702442151s to StartCluster
	I0916 10:23:52.815790   12642 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.815914   12642 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:52.816324   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.816539   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:52.816545   12642 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:52.816616   12642 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:52.816735   12642 addons.go:69] Setting yakd=true in profile "addons-821781"
	I0916 10:23:52.816749   12642 addons.go:69] Setting ingress-dns=true in profile "addons-821781"
	I0916 10:23:52.816756   12642 addons.go:69] Setting default-storageclass=true in profile "addons-821781"
	I0916 10:23:52.816766   12642 addons.go:69] Setting inspektor-gadget=true in profile "addons-821781"
	I0916 10:23:52.816771   12642 addons.go:234] Setting addon ingress-dns=true in "addons-821781"
	I0916 10:23:52.816777   12642 addons.go:234] Setting addon inspektor-gadget=true in "addons-821781"
	I0916 10:23:52.816781   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.816788   12642 addons.go:69] Setting cloud-spanner=true in profile "addons-821781"
	I0916 10:23:52.816798   12642 addons.go:234] Setting addon cloud-spanner=true in "addons-821781"
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816821   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816815   12642 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-821781"
	I0916 10:23:52.816831   12642 addons.go:69] Setting volumesnapshots=true in profile "addons-821781"
	I0916 10:23:52.816846   12642 addons.go:234] Setting addon volumesnapshots=true in "addons-821781"
	I0916 10:23:52.816852   12642 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-821781"
	I0916 10:23:52.816859   12642 addons.go:69] Setting gcp-auth=true in profile "addons-821781"
	I0916 10:23:52.816864   12642 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-821781"
	I0916 10:23:52.816869   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816875   12642 mustload.go:65] Loading cluster: addons-821781
	I0916 10:23:52.816879   12642 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:52.816885   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816897   12642 addons.go:69] Setting ingress=true in profile "addons-821781"
	I0916 10:23:52.816908   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816914   12642 addons.go:234] Setting addon ingress=true in "addons-821781"
	I0916 10:23:52.816821   12642 addons.go:69] Setting storage-provisioner=true in profile "addons-821781"
	I0916 10:23:52.816951   12642 addons.go:234] Setting addon storage-provisioner=true in "addons-821781"
	I0916 10:23:52.816952   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816967   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816991   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.817237   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817375   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816847   12642 addons.go:69] Setting helm-tiller=true in profile "addons-821781"
	I0916 10:23:52.817387   12642 addons.go:69] Setting registry=true in profile "addons-821781"
	I0916 10:23:52.817393   12642 addons.go:234] Setting addon helm-tiller=true in "addons-821781"
	I0916 10:23:52.817398   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817399   12642 addons.go:234] Setting addon registry=true in "addons-821781"
	I0916 10:23:52.817413   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817421   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817453   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817460   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817835   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817839   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.818548   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
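Each cli_runner line above shells out to `docker container inspect` with a Go template so the driver can confirm the machine is still running before touching it. An equivalent os/exec sketch (illustrative only; cli_runner adds the timing and logging seen here):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerStatus runs `docker container inspect --format={{.State.Status}}`
    // and returns the container state string, e.g. "running".
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		name, "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	status, err := containerStatus("addons-821781")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("addons-821781 is", status)
    }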
	I0916 10:23:52.816758   12642 addons.go:234] Setting addon yakd=true in "addons-821781"
	I0916 10:23:52.818812   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816831   12642 addons.go:69] Setting metrics-server=true in profile "addons-821781"
	I0916 10:23:52.819624   12642 addons.go:234] Setting addon metrics-server=true in "addons-821781"
	I0916 10:23:52.819661   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816777   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-821781"
	I0916 10:23:52.820048   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820121   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820925   12642 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:52.817377   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.823819   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:52.819369   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817378   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816830   12642 addons.go:69] Setting volcano=true in profile "addons-821781"
	I0916 10:23:52.827260   12642 addons.go:234] Setting addon volcano=true in "addons-821781"
	I0916 10:23:52.827341   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.827903   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816822   12642 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-821781"
	I0916 10:23:52.828667   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-821781"
	I0916 10:23:52.846468   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.849708   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.849779   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.858180   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:52.860117   12642 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:52.861491   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:52.861515   12642 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:52.861580   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.861792   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:52.863536   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:52.865265   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:52.868592   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:52.871812   12642 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:52.873467   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:52.873491   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:52.873553   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.873826   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:52.875500   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:52.876891   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:52.878274   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:52.878295   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:52.878358   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.885380   12642 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:52.887180   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:52.887200   12642 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:52.887253   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.887590   12642 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:52.889278   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:52.889293   12642 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:52.891126   12642 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:52.891146   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:52.891207   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.891375   12642 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:52.893052   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.893213   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:52.893225   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:52.893284   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.895906   12642 addons.go:234] Setting addon default-storageclass=true in "addons-821781"
	I0916 10:23:52.895950   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.896395   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.902602   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.904755   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:52.904779   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:52.904841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.913208   12642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:52.916490   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:52.916516   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:52.916578   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.920102   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.921373   12642 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:52.924287   12642 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:52.924310   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:52.924367   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.924567   12642 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:52.924966   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.927248   12642 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:52.927271   12642 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:52.927324   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	W0916 10:23:52.939182   12642 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
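The warning above is a runtime gate, not a failed apply: the volcano addon declares cri-o unsupported, so enabling it short-circuits with an error before any manifest is installed. A hypothetical sketch of that kind of compatibility check (not minikube's actual code):

    package main

    import "fmt"

    // unsupported maps an addon to container runtimes it refuses to run on.
    var unsupported = map[string][]string{
    	"volcano": {"crio"}, // per the warning in the log above
    }

    func enable(addon, runtime string) error {
    	for _, r := range unsupported[addon] {
    		if r == runtime {
    			return fmt.Errorf("%s addon does not support %s", addon, runtime)
    		}
    	}
    	return nil // compatible: proceed with the addon's callbacks
    }

    func main() {
    	if err := enable("volcano", "crio"); err != nil {
    		fmt.Println("! Enabling 'volcano' returned an error:", err)
    	}
    }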
	I0916 10:23:52.945562   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.947311   12642 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:52.949640   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:52.949813   12642 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:52.949828   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:52.949883   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.950915   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:52.950951   12642 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:52.951010   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.967061   12642 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-821781"
	I0916 10:23:52.967112   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.967600   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.976558   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.977128   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979407   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979587   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979666   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982295   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982301   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984209   12642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:52.984228   12642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:52.984267   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984282   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.985867   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.992433   12642 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:52.996036   12642 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:52.998876   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:52.998899   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:52.998966   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:53.007398   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.031542   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.198285   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:53.222232   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:53.223607   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:53.303303   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:53.303391   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:53.412003   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:53.494460   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:53.495317   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:53.495388   12642 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:53.500279   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:53.500366   12642 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:53.518431   12642 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:53.518460   12642 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:53.595357   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:53.595389   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:53.595502   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:53.595520   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:53.601235   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:53.601265   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:53.603514   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:53.610819   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:53.613851   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:53.696891   12642 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:53.696920   12642 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:53.697186   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:53.711949   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:53.711981   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:53.793955   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:53.794047   12642 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:53.795627   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:53.795652   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:53.810579   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:53.810623   12642 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:53.818121   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:53.818143   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:54.008884   12642 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:54.008915   12642 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:54.097416   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:54.097502   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:54.105048   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:54.114541   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:54.116113   12642 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.116175   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:54.194093   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:54.194181   12642 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:54.310015   12642 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:54.310107   12642 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:54.315950   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:54.316029   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:54.409828   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.595664   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:54.595750   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:54.795049   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:54.795131   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:54.795986   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:54.796042   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:54.798857   12642 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.60047423s)
	I0916 10:23:54.798970   12642 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
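The pipeline completed above rewrites the coredns ConfigMap in place: sed inserts a hosts{} block resolving host.minikube.internal to the gateway IP (192.168.49.1) immediately before the forward plugin, then kubectl replace pushes the edited Corefile back. The same transformation as plain string surgery in Go (illustrative only):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} block before the forward directive,
    // so in-cluster lookups of host.minikube.internal short-circuit there.
    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf(
    		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
    		hostIP)
    	return strings.Replace(corefile,
    		"        forward . /etc/resolv.conf",
    		hostsBlock+"        forward . /etc/resolv.conf",
    		1)
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }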
	I0916 10:23:54.798946   12642 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.576635993s)
	I0916 10:23:54.799977   12642 node_ready.go:35] waiting up to 6m0s for node "addons-821781" to be "Ready" ...
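node_ready.go now begins the 6-minute poll whose "Ready":"False" lines recur through the rest of this log. The loop boils down to re-reading the Node object and checking its NodeReady condition. A client-go sketch of that loop, assuming the kubeconfig path logged earlier (not minikube's node_ready implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-821781", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(3 * time.Second) // each miss produces a "Ready":"False" line
    	}
    	fmt.Println("timed out waiting for node Ready")
    }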
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:54.816489   12642 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:54.816544   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:55.096307   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:55.096398   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:55.098163   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:55.303720   12642 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:55.303802   12642 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:55.310866   12642 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:55.310939   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:55.509740   12642 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-821781" context rescaled to 1 replicas
	I0916 10:23:55.603909   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:55.603992   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:55.609116   12642 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:55.609197   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:55.701381   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:56.095470   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:56.095499   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:56.106357   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:56.115945   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.892303376s)
	I0916 10:23:56.209795   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:56.209873   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:56.410426   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:56.410515   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:56.511332   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.511408   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:56.813818   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.895029   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:58.497986   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.085861545s)
	I0916 10:23:58.498185   12642 addons.go:475] Verifying addon ingress=true in "addons-821781"
	I0916 10:23:58.498214   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.894594589s)
	I0916 10:23:58.498365   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.801136889s)
	I0916 10:23:58.498429   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.393306067s)
	I0916 10:23:58.498499   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.383877389s)
	I0916 10:23:58.498516   12642 addons.go:475] Verifying addon metrics-server=true in "addons-821781"
	I0916 10:23:58.498551   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.08869279s)
	I0916 10:23:58.498561   12642 addons.go:475] Verifying addon registry=true in "addons-821781"
	I0916 10:23:58.498687   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.40044143s)
	I0916 10:23:58.498148   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.003579441s)
	I0916 10:23:58.498265   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.887343223s)
	I0916 10:23:58.498721   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.884394452s)
	I0916 10:23:58.500166   12642 out.go:177] * Verifying registry addon...
	I0916 10:23:58.500186   12642 out.go:177] * Verifying ingress addon...
	I0916 10:23:58.500168   12642 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-821781 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:58.502840   12642 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:23:58.502984   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0916 10:23:58.505976   12642 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
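The 'local-path' error above is a textbook optimistic-concurrency conflict: the StorageClass was modified between the addon's read and its update, so the API server rejected the stale write with a 409. The conventional remedy is client-go's RetryOnConflict, which re-reads and re-applies the mutation on each conflict; a sketch (not what minikube does here, just illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		// re-read on every attempt so the update carries a fresh resourceVersion
    		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
    		return err // a 409 Conflict here triggers another attempt
    	})
    	if err != nil {
    		fmt.Println("could not mark local-path default:", err)
    	}
    }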
	I0916 10:23:58.508066   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:58.508081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:58.508299   12642 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:23:58.508315   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
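These kapi.go waits, and the long run of "waiting for pod" lines through the rest of the log, all follow one pattern: list pods by label selector and loop until every pod leaves Pending. A self-contained client-go sketch of that loop (illustrative; kapi.go's actual state handling is richer):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func allRunning(pods []corev1.Pod) bool {
    	if len(pods) == 0 {
    		return false
    	}
    	for _, p := range pods {
    		if p.Status.Phase != corev1.PodRunning {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	selector := "app.kubernetes.io/name=ingress-nginx" // label from the log above
    	for {
    		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && allRunning(pods.Items) {
    			fmt.Println("all pods Running for", selector)
    			return
    		}
    		fmt.Printf("waiting for pod %q, still Pending\n", selector)
    		time.Sleep(500 * time.Millisecond)
    	}
    }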
	I0916 10:23:59.012329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.110843   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.299182   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.597694462s)
	W0916 10:23:59.299228   12642 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:59.299250   12642 retry.go:31] will retry after 144.288551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
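Both dumps of this error describe the same ordering race: the VolumeSnapshotClass custom resource is applied in the same kubectl apply batch as the CRDs that define it, and the CRDs are not yet established when the CR is validated, hence "ensure CRDs are installed first". The log's remedy is a short jittered retry (144ms here) followed by a second apply with --force, visible just below. The retry shape, sketched with a hypothetical apply helper:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // applyAddonManifests stands in for the kubectl apply batch; the first
    // attempt loses the race against CRD establishment (hypothetical helper).
    func applyAddonManifests(attempt int) error {
    	if attempt == 0 {
    		return errors.New(`no matches for kind "VolumeSnapshotClass"`)
    	}
    	return nil // by the retry, the CRDs are established and the CR applies
    }

    func main() {
    	for attempt := 0; ; attempt++ {
    		err := applyAddonManifests(attempt)
    		if err == nil {
    			return
    		}
    		// jittered delay, like the 144.288551ms chosen by retry.go above
    		delay := time.Duration(rand.Int63n(int64(200 * time.Millisecond)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    }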
	I0916 10:23:59.299277   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.19282086s)
	I0916 10:23:59.305158   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:59.444238   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.506924   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.507806   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.539307   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.725399907s)
	I0916 10:23:59.539335   12642 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:59.541718   12642 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:59.543660   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:59.597366   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:59.597452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.006951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.007539   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.096393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.099134   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:00.099205   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.125424   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:24:00.418412   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:00.508361   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.509838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.518754   12642 addons.go:234] Setting addon gcp-auth=true in "addons-821781"
	I0916 10:24:00.518809   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:24:00.519365   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:24:00.536851   12642 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:00.536902   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.553493   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:24:00.596428   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.006170   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.006803   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.047121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.506287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.506534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.547185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.805560   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:02.007448   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.008038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.046600   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.202834   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.758545356s)
	I0916 10:24:02.202854   12642 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.665973141s)
	I0916 10:24:02.205053   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:02.206664   12642 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:02.208283   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:02.208296   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:02.226305   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:02.226333   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:02.244167   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.244187   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:02.298853   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.506489   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.506968   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.547297   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.899621   12642 addons.go:475] Verifying addon gcp-auth=true in "addons-821781"
	I0916 10:24:02.901591   12642 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:02.904224   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:02.907029   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:02.907051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.007207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.007880   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.047134   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.407111   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.506509   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.507075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.547522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.907027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.007265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.007643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.046594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.303245   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:04.407879   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.506365   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.506939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.547412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.907817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.006397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.007232   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.047038   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.407918   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.506892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.507154   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.547266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.907671   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.006358   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.006625   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.046717   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.407766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.506364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.506750   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.547000   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.803631   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:06.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical kapi.go:96 polling of the registry, ingress-nginx, csi-hostpath-driver, and gcp-auth label selectors repeated about twice per second from 10:24:07 through 10:24:33, all Pending, while node_ready.go:53 reported node "addons-821781" with status "Ready":"False" at roughly two-second intervals ...]
	I0916 10:24:33.803187   12642 node_ready.go:49] node "addons-821781" has status "Ready":"True"
	I0916 10:24:33.803213   12642 node_ready.go:38] duration metric: took 39.003174602s for node "addons-821781" to be "Ready" ...
	I0916 10:24:33.803225   12642 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:24:33.970599   12642 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:34.069001   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.088106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.088355   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:34.088380   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.088736   12642 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:34.088757   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.407852   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.508926   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.509671   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.609806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.907890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.006456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.006807   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.047745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.407857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.476382   12642 pod_ready.go:93] pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.476406   12642 pod_ready.go:82] duration metric: took 1.50577246s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.476429   12642 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480336   12642 pod_ready.go:93] pod "etcd-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.480359   12642 pod_ready.go:82] duration metric: took 3.921757ms for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480374   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484379   12642 pod_ready.go:93] pod "kube-apiserver-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.484399   12642 pod_ready.go:82] duration metric: took 4.01835ms for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484407   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488483   12642 pod_ready.go:93] pod "kube-controller-manager-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.488502   12642 pod_ready.go:82] duration metric: took 4.089026ms for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488513   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492259   12642 pod_ready.go:93] pod "kube-proxy-7grrw" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.492277   12642 pod_ready.go:82] duration metric: took 3.758267ms for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492286   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.508978   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.509276   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.548257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.875363   12642 pod_ready.go:93] pod "kube-scheduler-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.875387   12642 pod_ready.go:82] duration metric: took 383.093988ms for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.875399   12642 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.907718   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical kapi.go:96 polling of the gcp-auth, registry, ingress-nginx, and csi-hostpath-driver label selectors repeated about twice per second from 10:24:36 through 10:24:59, all Pending, while pod_ready.go:103 reported pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace with status "Ready":"False" at roughly two-second intervals ...]
	I0916 10:24:59.907493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.006520   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.006934   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.407658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.507503   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.548304   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.908137   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.007637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.007838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.048049   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.381960   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:01.407780   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.506951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.507128   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.549865   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.908484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.009640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.009714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.047344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.407125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.506639   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.547791   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.908024   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.007189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.007861   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.048215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.408697   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.509655   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.509879   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.547998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.881604   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:03.907142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.006400   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.006547   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.047579   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.407594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.509746   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.510002   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.547819   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.907345   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.006657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.006921   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.048328   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.407535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.506637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.506876   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.548360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.881794   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:05.907547   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.006578   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.007101   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.047920   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.408051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.506012   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.506238   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.548610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.006786   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.007057   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.048484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.407806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.506692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.506986   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.548007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.907772   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.006701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.006970   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.047834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.394559   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:08.408017   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.507156   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.507728   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.597758   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.907919   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.007661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.098454   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.408318   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.509364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.510773   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.598483   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.908201   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.008441   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.009850   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.102292   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.398327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:10.408466   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.507500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.507925   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.548323   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.907708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.006815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.008091   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.047722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.407736   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.507196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.507427   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.599680   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.907752   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.007430   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.007699   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.047776   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.407516   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.506452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.506628   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.550195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.880927   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:12.907727   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.007178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.007457   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.407946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.507322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.507501   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.547784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.908011   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.007871   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.008085   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.049162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.407342   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.506366   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.507489   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.597388   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.881914   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:14.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.007276   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.008484   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.097577   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.407927   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.507867   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.508145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.548701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.909823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.012269   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.012490   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.112080   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.407823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.506640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.507038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.547677   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.908338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.006229   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.006500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.047433   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.380841   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:17.408141   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.507281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.507422   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.548306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.908216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.005946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.006253   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.048471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.407630   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.506857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.507586   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.547722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.908142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.007287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.007657   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.048873   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.399218   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:19.408522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.506838   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.506974   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.548754   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.907508   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.006666   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.007738   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.096885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.407683   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.507079   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.507594   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.549277   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.938821   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.007125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.007361   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.049052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.408461   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.506721   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.507045   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.548148   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.881149   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:21.907701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.007091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.007530   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.108828   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.408067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.507251   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.507505   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.549744   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.908512   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.006557   12642 kapi.go:107] duration metric: took 1m24.503572468s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:23.007211   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.050575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.408216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.507222   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.548029   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.881704   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:23.907636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.006951   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.048091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.407560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.506856   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.548705   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.907750   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.006941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.048097   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.408473   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.507086   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.548651   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.907834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.007469   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.415775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.417875   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:26.507746   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.549493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.908404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.009635   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.048391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.408105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.509068   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.548222   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.908042   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.007883   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.047932   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.408370   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.507379   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.548467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.898654   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:28.907039   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.007310   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.048105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.407790   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.507440   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.598195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.907810   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.007961   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.407748   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.548456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.908206   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.007623   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.048306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.380691   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:31.407719   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.506896   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.547878   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.907840   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.007212   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.048133   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.407238   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.506798   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.548528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.907455   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.006747   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.047570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.381514   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:33.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.506478   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.548374   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.907944   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.007347   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.048784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.408200   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.506244   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.548189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.907539   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.006862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.049282   12642 kapi.go:107] duration metric: took 1m35.505619997s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:25:35.407599   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.881121   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:35.907998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.007303   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.407476   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.506940   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.006647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.408081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.507464   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.908184   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.007201   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.381474   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:38.407986   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.508647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.908946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.008435   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.408471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.510473   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.995610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.008869   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.397632   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:40.408032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.509659   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.907933   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.007031   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.408056   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.508041   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.908287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.006885   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.407440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.880849   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:42.907379   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.008348   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.408661   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.907189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.006692   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.407965   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.507074   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.908416   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.006411   12642 kapi.go:107] duration metric: took 1m46.503572843s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:45.381179   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:45.459019   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.907457   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.408510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.907182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.396594   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:47.407631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.908030   12642 kapi.go:107] duration metric: took 1m45.003803312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:47.909696   12642 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-821781 cluster.
	I0916 10:25:47.911374   12642 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:47.913470   12642 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:47.915138   12642 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, helm-tiller, metrics-server, storage-provisioner, cloud-spanner, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:47.916678   12642 addons.go:510] duration metric: took 1m55.100061322s for enable addons: enabled=[ingress-dns nvidia-device-plugin helm-tiller metrics-server storage-provisioner cloud-spanner yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
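The opt-out described in the gcp-auth messages above is per pod: a pod carrying the gcp-auth-skip-secret label is skipped by the credential-injecting webhook. A minimal sketch of such a pod built with client-go types; the label value "true" is an assumption, since the log names only the key:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newOptOutPod builds a pod labeled so the gcp-auth addon does not mount
    // GCP credentials into it, per the message logged above. The label value
    // "true" is illustrative; the log only specifies the key.
    func newOptOutPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "no-gcp-creds",
                Namespace: "default",
                Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
            },
        }
    }

Existing pods keep their current behavior until recreated, which is what the --refresh note above is pointing at.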
	I0916 10:25:49.881225   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:52.381442   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:54.380287   12642 pod_ready.go:93] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.380308   12642 pod_ready.go:82] duration metric: took 1m18.504902601s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.380318   12642 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384430   12642 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.384450   12642 pod_ready.go:82] duration metric: took 4.126025ms for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384468   12642 pod_ready.go:39] duration metric: took 1m20.581229133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
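The pod_ready.go and kapi.go entries above all record the same pattern: list pods by label selector, check the Ready condition, sleep, retry until a deadline. A minimal client-go sketch of that loop, assuming a flat 500ms poll interval (the real helpers use their own intervals and timeouts):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls pods matching selector in ns until one reports
    // the Ready condition True, mirroring the checks logged above.
    func waitForPodReady(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := c.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil {
                for _, p := range pods.Items {
                    for _, cond := range p.Status.Conditions {
                        if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                            return nil // a matching pod is Ready
                        }
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // interval is an assumption
        }
        return fmt.Errorf("timed out waiting for %q in %q to be Ready", selector, ns)
    }

A selector such as "kubernetes.io/minikube-addons=gcp-auth" from the lines above would be passed as the selector argument.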
	I0916 10:25:54.384485   12642 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:25:54.384513   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:54.384564   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:54.417384   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.417411   12642 cri.go:89] found id: ""
	I0916 10:25:54.417421   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:54.417476   12642 ssh_runner.go:195] Run: which crictl
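Each cri.go/ssh_runner.go cycle above resolves container IDs by shelling out to crictl. A sketch of the same lookup, assuming crictl is on PATH and sudo is passwordless, matching the `sudo crictl ps -a --quiet --name=...` commands in the log:

    package main

    import (
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs of all CRI containers (any state)
    // whose name matches the given filter, as in the crictl invocations
    // logged above; --quiet prints one ID per line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

With --name=kube-apiserver this yields the single ID the logs.go lines above report as "1 containers".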
	I0916 10:25:54.420785   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:54.420839   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:54.452868   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.452890   12642 cri.go:89] found id: ""
	I0916 10:25:54.452898   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:54.452950   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.456066   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:54.456119   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:54.487907   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:54.487930   12642 cri.go:89] found id: ""
	I0916 10:25:54.487938   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:54.487992   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.491215   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:54.491266   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:54.523745   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.523766   12642 cri.go:89] found id: ""
	I0916 10:25:54.523775   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:54.523831   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.527161   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:54.527229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:54.560095   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.560123   12642 cri.go:89] found id: ""
	I0916 10:25:54.560133   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:54.560180   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.563529   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:54.563589   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:54.596576   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:54.596600   12642 cri.go:89] found id: ""
	I0916 10:25:54.596608   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:54.596655   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.599825   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:54.599906   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:54.632507   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:54.632531   12642 cri.go:89] found id: ""
	I0916 10:25:54.632539   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:54.632620   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.635882   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:54.635906   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:54.698451   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:54.698492   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:54.799766   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:54.799797   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.843933   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:54.843963   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.894142   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:54.894174   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.934257   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:54.934288   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.967135   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:54.967163   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:55.001104   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:55.001133   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:55.013631   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:55.013663   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:55.047469   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:55.047499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:55.106750   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:55.106787   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:55.182277   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:55.182324   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:25:57.726595   12642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:25:57.740119   12642 api_server.go:72] duration metric: took 2m4.923540882s to wait for apiserver process to appear ...
	I0916 10:25:57.740152   12642 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:25:57.740187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:57.740229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:57.772533   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:57.772558   12642 cri.go:89] found id: ""
	I0916 10:25:57.772566   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:57.772615   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.775778   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:57.775838   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:57.813245   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:57.813271   12642 cri.go:89] found id: ""
	I0916 10:25:57.813281   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:57.813354   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.817691   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:57.817769   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:57.851306   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:57.851328   12642 cri.go:89] found id: ""
	I0916 10:25:57.851335   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:57.851378   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.854640   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:57.854706   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:57.904175   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:57.904198   12642 cri.go:89] found id: ""
	I0916 10:25:57.904205   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:57.904252   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.907938   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:57.907996   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:57.941402   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:57.941421   12642 cri.go:89] found id: ""
	I0916 10:25:57.941428   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:57.941481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.944741   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:57.944796   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:57.979020   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:57.979042   12642 cri.go:89] found id: ""
	I0916 10:25:57.979051   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:57.979108   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.982381   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:57.982431   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:58.014858   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:58.014881   12642 cri.go:89] found id: ""
	I0916 10:25:58.014890   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:58.014937   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:58.018251   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:58.018272   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:58.050812   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:58.050847   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:58.108286   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:58.108318   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:58.182964   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:58.183002   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:58.248089   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:58.248126   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:58.260293   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:58.260339   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:58.355509   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:58.355535   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:58.398314   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:58.398350   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:58.445703   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:58.445736   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:25:58.485997   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:58.486025   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:58.519971   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:58.519998   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:58.558470   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:58.558499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.092930   12642 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:26:01.096706   12642 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:26:01.097615   12642 api_server.go:141] control plane version: v1.31.1
	I0916 10:26:01.097635   12642 api_server.go:131] duration metric: took 3.357476241s to wait for apiserver health ...
	I0916 10:26:01.097642   12642 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:26:01.097662   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:26:01.097709   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:26:01.131450   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.131477   12642 cri.go:89] found id: ""
	I0916 10:26:01.131489   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:26:01.131542   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.134752   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:26:01.134813   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:26:01.166978   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.167002   12642 cri.go:89] found id: ""
	I0916 10:26:01.167014   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:26:01.167057   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.170770   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:26:01.170821   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:26:01.203544   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.203564   12642 cri.go:89] found id: ""
	I0916 10:26:01.203571   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:26:01.203632   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.207027   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:26:01.207101   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:26:01.240766   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.240787   12642 cri.go:89] found id: ""
	I0916 10:26:01.240795   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:26:01.240847   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.244187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:26:01.244242   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:26:01.278657   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.278686   12642 cri.go:89] found id: ""
	I0916 10:26:01.278696   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:26:01.278754   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.282264   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:26:01.282333   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:26:01.316408   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.316431   12642 cri.go:89] found id: ""
	I0916 10:26:01.316439   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:26:01.316481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.319848   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:26:01.319913   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:26:01.352617   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.352637   12642 cri.go:89] found id: ""
	I0916 10:26:01.352645   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:26:01.352692   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.356052   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:26:01.356078   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:26:01.430171   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:26:01.430203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:26:01.471970   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:26:01.472001   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.512405   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:26:01.512437   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.545482   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:26:01.545511   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:26:01.657458   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:26:01.657495   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.703167   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:26:01.703203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.753488   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:26:01.753528   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.788778   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:26:01.788809   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.847216   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:26:01.847252   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.883444   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:26:01.883479   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:26:01.950602   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:26:01.950637   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:26:04.473621   12642 system_pods.go:59] 19 kube-system pods found
	I0916 10:26:04.473667   12642 system_pods.go:61] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.473674   12642 system_pods.go:61] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.473678   12642 system_pods.go:61] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.473681   12642 system_pods.go:61] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.473685   12642 system_pods.go:61] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.473688   12642 system_pods.go:61] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.473692   12642 system_pods.go:61] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.473696   12642 system_pods.go:61] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.473699   12642 system_pods.go:61] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.473702   12642 system_pods.go:61] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.473706   12642 system_pods.go:61] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.473709   12642 system_pods.go:61] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.473712   12642 system_pods.go:61] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.473715   12642 system_pods.go:61] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.473718   12642 system_pods.go:61] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.473722   12642 system_pods.go:61] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.473725   12642 system_pods.go:61] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.473728   12642 system_pods.go:61] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.473731   12642 system_pods.go:61] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.473737   12642 system_pods.go:74] duration metric: took 3.376089349s to wait for pod list to return data ...
	I0916 10:26:04.473747   12642 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:26:04.476243   12642 default_sa.go:45] found service account: "default"
	I0916 10:26:04.476265   12642 default_sa.go:55] duration metric: took 2.512507ms for default service account to be created ...
	I0916 10:26:04.476273   12642 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:26:04.484719   12642 system_pods.go:86] 19 kube-system pods found
	I0916 10:26:04.484756   12642 system_pods.go:89] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.484762   12642 system_pods.go:89] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.484766   12642 system_pods.go:89] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.484770   12642 system_pods.go:89] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.484774   12642 system_pods.go:89] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.484778   12642 system_pods.go:89] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.484782   12642 system_pods.go:89] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.484786   12642 system_pods.go:89] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.484790   12642 system_pods.go:89] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.484796   12642 system_pods.go:89] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.484800   12642 system_pods.go:89] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.484803   12642 system_pods.go:89] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.484807   12642 system_pods.go:89] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.484812   12642 system_pods.go:89] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.484818   12642 system_pods.go:89] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.484822   12642 system_pods.go:89] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.484826   12642 system_pods.go:89] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.484830   12642 system_pods.go:89] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.484834   12642 system_pods.go:89] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.484840   12642 system_pods.go:126] duration metric: took 8.563189ms to wait for k8s-apps to be running ...
	I0916 10:26:04.484851   12642 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:26:04.484897   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:26:04.496212   12642 system_svc.go:56] duration metric: took 11.351945ms WaitForService to wait for kubelet
	I0916 10:26:04.496239   12642 kubeadm.go:582] duration metric: took 2m11.67966753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:26:04.496261   12642 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:26:04.499350   12642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:26:04.499377   12642 node_conditions.go:123] node cpu capacity is 8
	I0916 10:26:04.499389   12642 node_conditions.go:105] duration metric: took 3.122952ms to run NodePressure ...
	I0916 10:26:04.499400   12642 start.go:241] waiting for startup goroutines ...
	I0916 10:26:04.499406   12642 start.go:246] waiting for cluster config update ...
	I0916 10:26:04.499455   12642 start.go:255] writing updated cluster config ...
	I0916 10:26:04.519561   12642 ssh_runner.go:195] Run: rm -f paused
	I0916 10:26:04.665202   12642 out.go:177] * Done! kubectl is now configured to use "addons-821781" cluster and "default" namespace by default
	E0916 10:26:04.666644   12642 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
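
The closing "exec format error" means the kernel refused to execute /usr/local/bin/kubectl, which almost always indicates a binary built for a different CPU architecture than the host; that would explain why every direct kubectl invocation in this run fails while the kubectl bundled inside the node keeps working. A quick check, assuming shell access to the CI host:

    file /usr/local/bin/kubectl   # prints the binary's ELF architecture
    uname -m                      # compare against the host architecture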
	
	
	==> CRI-O <==
	Sep 16 10:25:47 addons-821781 crio[1028]: time="2024-09-16 10:25:47.674950972Z" level=info msg="Started container" PID=6713 containerID=0dbc187486a77d691a5db4775360d83cdf6dd7084d4c3bd9123b7e051fd6bd74 description=gcp-auth/gcp-auth-89d5ffd79-b6kzx/gcp-auth id=3405075e-6f21-4717-9790-28e95b21db75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=754882dcda596fac25a1f61a5da2a093e20801c47119e6ab0dffa11af087ccac
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.315830914Z" level=info msg="Stopping container: c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a (timeout: 30s)" id=ebea5d92-5d65-40cf-8a27-a36311da1c36 name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:26:17 addons-821781 conmon[3824]: conmon c2005114512cfcc46499 <ninfo>: container 3836 exited with status 2
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.326344684Z" level=info msg="Stopping container: 3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78 (timeout: 30s)" id=a960a039-c370-40ed-904d-d4b090c7e6aa name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.452255692Z" level=info msg="Stopped container c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a: kube-system/registry-66c9cd494c-48kvj/registry" id=ebea5d92-5d65-40cf-8a27-a36311da1c36 name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.452971524Z" level=info msg="Stopping pod sandbox: 4c1a5715ac4e07f78ef5a85f9fa1657c63febcae095f832363ad71fffd1a602f" id=898e8325-58f3-496f-96ce-a2423e47d89d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.453234284Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-48kvj Namespace:kube-system ID:4c1a5715ac4e07f78ef5a85f9fa1657c63febcae095f832363ad71fffd1a602f UID:36c41e69-8354-4fce-98a3-99b23a9ab570 NetNS:/var/run/netns/b897c491-69d2-46e3-811d-1b117e93ce08 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.453442375Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-48kvj from CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.469509210Z" level=info msg="Stopped container 3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78: kube-system/registry-proxy-hbwdk/registry-proxy" id=a960a039-c370-40ed-904d-d4b090c7e6aa name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.470036200Z" level=info msg="Stopping pod sandbox: fca44caa17cf40bcbfdbef53d03b2b58709c863069b859bca01c67f0bbda472b" id=7e4b169c-b895-47ad-a057-d2caa0ee5104 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.474391796Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-2WLPIE7V726JIYOM - [0:0]\n:KUBE-HP-KGF7YVBSCC3IFBXU - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-ZBXNL255AJZ5ULLK - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-8jlsc_ingress-nginx_c4a6e49a-36e5-4187-a1d5-ff337b562029_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-2WLPIE7V726JIYOM\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-8jlsc_ingress-nginx_c4a6e49a-36e5-4187-a1d5-ff337b562029_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-KGF7YVBSCC3IFBXU\n-A KUBE-HP-2WLPIE7V726JIYOM -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-8jlsc_ingress-nginx_c4a6e49a-36e5-4187-a1d5-ff337b562029_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-2WLPIE7V726JIYOM -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-8jlsc_ingress-nginx_c4a6e49a-36e5-4187-a
1d5-ff337b562029_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.20:443\n-A KUBE-HP-KGF7YVBSCC3IFBXU -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-8jlsc_ingress-nginx_c4a6e49a-36e5-4187-a1d5-ff337b562029_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-KGF7YVBSCC3IFBXU -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-8jlsc_ingress-nginx_c4a6e49a-36e5-4187-a1d5-ff337b562029_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.20:80\n-X KUBE-HP-ZBXNL255AJZ5ULLK\nCOMMIT\n"
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.477316257Z" level=info msg="Closing host port tcp:5000"
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.479108097Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.479359753Z" level=info msg="Got pod network &{Name:registry-proxy-hbwdk Namespace:kube-system ID:fca44caa17cf40bcbfdbef53d03b2b58709c863069b859bca01c67f0bbda472b UID:44cd3bc9-5996-4fb6-b54d-fe98c6c50a75 NetNS:/var/run/netns/4c871b9e-d69f-450f-b243-9ad616c9988f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.479530812Z" level=info msg="Deleting pod kube-system_registry-proxy-hbwdk from CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.494122129Z" level=info msg="Stopped pod sandbox: 4c1a5715ac4e07f78ef5a85f9fa1657c63febcae095f832363ad71fffd1a602f" id=898e8325-58f3-496f-96ce-a2423e47d89d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.514874736Z" level=info msg="Stopped pod sandbox: fca44caa17cf40bcbfdbef53d03b2b58709c863069b859bca01c67f0bbda472b" id=7e4b169c-b895-47ad-a057-d2caa0ee5104 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.798565047Z" level=info msg="Removing container: c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a" id=675c4c8d-4554-4570-a670-3a001ddc1e8b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.814279226Z" level=info msg="Removed container c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a: kube-system/registry-66c9cd494c-48kvj/registry" id=675c4c8d-4554-4570-a670-3a001ddc1e8b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.816884890Z" level=info msg="Removing container: 3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78" id=7a1b2477-5c22-494d-96bb-780b5af9b6c4 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:26:17 addons-821781 crio[1028]: time="2024-09-16 10:26:17.840441093Z" level=info msg="Removed container 3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78: kube-system/registry-proxy-hbwdk/registry-proxy" id=7a1b2477-5c22-494d-96bb-780b5af9b6c4 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:26:18 addons-821781 crio[1028]: time="2024-09-16 10:26:18.107518682Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec" id=14ee48b1-b959-4dd7-a2b2-0f9d730a7f19 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:18 addons-821781 crio[1028]: time="2024-09-16 10:26:18.107865686Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec ghcr.io/inspektor-gadget/inspektor-gadget@sha256:80a3bcbb29ca0fd2aae79ec8aad1e690dd02c7616a34e723a03fd5160888135c],Size_:176758647,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=14ee48b1-b959-4dd7-a2b2-0f9d730a7f19 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:18 addons-821781 crio[1028]: time="2024-09-16 10:26:18.108393344Z" level=info msg="Pulling image: ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec" id=c0b3cd48-544d-4999-9f2f-331f1f66ae7f name=/runtime.v1.ImageService/PullImage
	Sep 16 10:26:18 addons-821781 crio[1028]: time="2024-09-16 10:26:18.112716916Z" level=info msg="Trying to access \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	0dbc187486a77       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 30 seconds ago       Running             gcp-auth                                 0                   754882dcda596       gcp-auth-89d5ffd79-b6kzx
	3603c45c1e4ab       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             34 seconds ago       Running             controller                               0                   31855714f04d8       ingress-nginx-controller-bc57996ff-8jlsc
	b6501ff69088d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          44 seconds ago       Running             csi-snapshotter                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	85a5122ba30eb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          45 seconds ago       Running             csi-provisioner                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	33527f5387a55       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            47 seconds ago       Running             liveness-probe                           0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	2b3dcba2a09e7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           48 seconds ago       Running             hostpath                                 0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ea5a7e7486ae3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                49 seconds ago       Running             node-driver-registrar                    0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	db9122887911e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            51 seconds ago       Exited              gadget                                   3                   300e5b8a22c3e       gadget-fmlhp
	5247d23b3a397       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      51 seconds ago       Running             volume-snapshot-controller               0                   5faba155231dd       snapshot-controller-56fcc65765-tdxm7
	68547a0643ba6       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              51 seconds ago       Running             csi-resizer                              0                   4cb61d4296010       csi-hostpath-resizer-0
	a2eec9453e9d3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             53 seconds ago       Running             csi-attacher                             0                   205f02ffaeb65       csi-hostpath-attacher-0
	d3033819602e2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   54 seconds ago       Running             csi-external-health-monitor-controller   0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	3a0120cc473d1       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             56 seconds ago       Exited              patch                                    2                   828500afdd55e       gcp-auth-certs-patch-r7gss
	68ea8b735b964       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   About a minute ago   Exited              create                                   0                   0c61fa457ab4b       gcp-auth-certs-create-frrll
	ffffb6d23a520       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   About a minute ago   Exited              patch                                    0                   0defdefc8e690       ingress-nginx-admission-patch-22v56
	adcb6aad69051       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   b44ff8bf56a7c       snapshot-controller-56fcc65765-b752p
	d7c74998aab32       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   About a minute ago   Exited              create                                   0                   92efe213e3cc9       ingress-nginx-admission-create-dgb9n
	0a51b16943475       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   f45eff018c007       nvidia-device-plugin-daemonset-fs477
	b990bd791612e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   587e47b3a6ff4       yakd-dashboard-67d98fc6b-sp84b
	318be751079db       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   cdfaa5befff59       local-path-provisioner-86d989889c-6xhgj
	960e66cd3823f       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  About a minute ago   Running             tiller                                   0                   5f0be722b34e2       tiller-deploy-b48cc5f79-jcsqv
	2a650198714d3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        About a minute ago   Running             metrics-server                           0                   a92ded8c2c84e       metrics-server-84c5f94fbc-t6sfx
	a04fa37b5df26       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               About a minute ago   Running             cloud-spanner-emulator                   0                   5511b102a4056       cloud-spanner-emulator-769b77f747-hpwnk
	9db25418c7b36       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             About a minute ago   Running             minikube-ingress-dns                     0                   0a160d796662b       kube-ingress-dns-minikube
	fd1c0fa2e8742       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   578052293e511       storage-provisioner
	5fc078f948938       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             About a minute ago   Running             coredns                                  0                   dd25c29f2c98b       coredns-7c65d6cfc9-f6b44
	8953bd3ac9bbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             2 minutes ago        Running             kube-proxy                               0                   31612ec902e41       kube-proxy-7grrw
	e3e02e9338f21       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             2 minutes ago        Running             kindnet-cni                              0                   efca226e04346       kindnet-2bwl4
	f7c9dd60c650e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             2 minutes ago        Running             kube-apiserver                           0                   325d1d3961d30       kube-apiserver-addons-821781
	aef3299386ef0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             2 minutes ago        Running             etcd                                     0                   5db6677261478       etcd-addons-821781
	23817b3f6401e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             2 minutes ago        Running             kube-scheduler                           0                   192ccdf49d648       kube-scheduler-addons-821781
	319dfee9ab334       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             2 minutes ago        Running             kube-controller-manager                  0                   471807181e888       kube-controller-manager-addons-821781
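
The container status table above is the output of crictl ps -a on the node; any entry can be tailed with the same crictl invocation minikube's gatherer uses, and the truncated IDs in the first column are accepted as unique prefixes:

    sudo crictl ps -a                        # list running and exited containers
    sudo crictl logs --tail 400 5fc078f9489  # e.g. the coredns container above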
	
	
	==> coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] <==
	[INFO] 10.244.0.11:54433 - 5196 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117872s
	[INFO] 10.244.0.11:55203 - 39009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079023s
	[INFO] 10.244.0.11:55203 - 18278 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066179s
	[INFO] 10.244.0.11:53992 - 3361 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005725192s
	[INFO] 10.244.0.11:53992 - 5182 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005902528s
	[INFO] 10.244.0.11:58640 - 39752 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005962306s
	[INFO] 10.244.0.11:58640 - 45636 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007442692s
	[INFO] 10.244.0.11:58081 - 46876 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004814518s
	[INFO] 10.244.0.11:58081 - 7960 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005069952s
	[INFO] 10.244.0.11:56786 - 21825 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084442s
	[INFO] 10.244.0.11:56786 - 8540 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121405s
	[INFO] 10.244.0.21:49162 - 58748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183854s
	[INFO] 10.244.0.21:60540 - 21143 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264439s
	[INFO] 10.244.0.21:57612 - 22108 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123843s
	[INFO] 10.244.0.21:56370 - 29690 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174744s
	[INFO] 10.244.0.21:53939 - 42345 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115165s
	[INFO] 10.244.0.21:54191 - 30184 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102696s
	[INFO] 10.244.0.21:43721 - 49242 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007714914s
	[INFO] 10.244.0.21:58502 - 61297 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008280312s
	[INFO] 10.244.0.21:45585 - 36043 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008154564s
	[INFO] 10.244.0.21:50514 - 10749 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008661461s
	[INFO] 10.244.0.21:41083 - 31758 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006832696s
	[INFO] 10.244.0.21:53762 - 8306 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007439813s
	[INFO] 10.244.0.21:37796 - 13809 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002178233s
	[INFO] 10.244.0.21:36516 - 28559 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002337896s
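
The NXDOMAIN bursts above are expected: with the Kubernetes default of ndots:5, the pod resolver expands each short name through every search domain (svc.cluster.local, cluster.local, then the GCE internal domains) before the bare name finally answers NOERROR. A sketch of how to observe the same expansion from inside a pod, assuming its image ships dig:

    cat /etc/resolv.conf                  # shows the search list seen in the log
    dig +search storage.googleapis.com    # walks the same search domains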
	
	
	==> describe nodes <==
	Name:               addons-821781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-821781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-821781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-821781
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-821781"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-821781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:26:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:25:49 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:25:49 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:25:49 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:25:49 +0000   Mon, 16 Sep 2024 10:24:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-821781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a93a1abfd8e74fb89ecb0b25fd80b840
	  System UUID:                c474d608-aa29-4551-b357-d17e9479a01d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-hpwnk     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-fmlhp                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  gcp-auth                    gcp-auth-89d5ffd79-b6kzx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8jlsc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m20s
	  kube-system                 coredns-7c65d6cfc9-f6b44                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m26s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 csi-hostpathplugin-pwtwp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 etcd-addons-821781                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m31s
	  kube-system                 kindnet-2bwl4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-addons-821781                250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-addons-821781       200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-7grrw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-addons-821781                100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 metrics-server-84c5f94fbc-t6sfx             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m21s
	  kube-system                 nvidia-device-plugin-daemonset-fs477        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 snapshot-controller-56fcc65765-b752p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 snapshot-controller-56fcc65765-tdxm7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 tiller-deploy-b48cc5f79-jcsqv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  local-path-storage          local-path-provisioner-86d989889c-6xhgj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-sp84b              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m25s  kube-proxy       
	  Normal   Starting                 2m31s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m31s  kubelet          Node addons-821781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s  kubelet          Node addons-821781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s  kubelet          Node addons-821781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m27s  node-controller  Node addons-821781 event: Registered Node addons-821781 in Controller
	  Normal   NodeReady                105s   kubelet          Node addons-821781 status is now: NodeReady
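
This section was produced with the kubectl binary that minikube ships inside the node, which sidesteps the broken host kubectl; the same command seen in the gathering log above can be run by hand:

    minikube ssh -p addons-821781 -- sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
        describe nodes --kubeconfig=/var/lib/minikube/kubeconfig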
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.000714]  #3
	[  +0.002750]  #4
	[  +0.001708] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003513] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002098] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] <==
	{"level":"warn","ts":"2024-09-16T10:24:33.965134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.284694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-09-16T10:24:33.965140Z","caller":"traceutil/trace.go:171","msg":"trace[589393049] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.482158ms","start":"2024-09-16T10:24:33.834652Z","end":"2024-09-16T10:24:33.965134Z","steps":["trace[589393049] 'agreement among raft nodes before linearized reading'  (duration: 130.392783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.112983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs\" ","response":"range_response_count:1 size:560"}
	{"level":"warn","ts":"2024-09-16T10:24:33.965172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.412831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/default\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964790Z","caller":"traceutil/trace.go:171","msg":"trace[1719481168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-resizer; range_end:; response_count:1; response_revision:871; }","duration":"130.308398ms","start":"2024-09-16T10:24:33.834475Z","end":"2024-09-16T10:24:33.964784Z","steps":["trace[1719481168] 'agreement among raft nodes before linearized reading'  (duration: 130.231604ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965031Z","caller":"traceutil/trace.go:171","msg":"trace[1439753586] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:871; }","duration":"130.351105ms","start":"2024-09-16T10:24:33.834675Z","end":"2024-09-16T10:24:33.965026Z","steps":["trace[1439753586] 'agreement among raft nodes before linearized reading'  (duration: 130.285964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.622694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:979"}
	{"level":"info","ts":"2024-09-16T10:24:33.965260Z","caller":"traceutil/trace.go:171","msg":"trace[3301844] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.644948ms","start":"2024-09-16T10:24:33.834605Z","end":"2024-09-16T10:24:33.965250Z","steps":["trace[3301844] 'agreement among raft nodes before linearized reading'  (duration: 130.58562ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.745393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:1 size:878"}
	{"level":"info","ts":"2024-09-16T10:24:33.965091Z","caller":"traceutil/trace.go:171","msg":"trace[630312888] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.242708ms","start":"2024-09-16T10:24:33.834842Z","end":"2024-09-16T10:24:33.965085Z","steps":["trace[630312888] 'agreement among raft nodes before linearized reading'  (duration: 130.2013ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965306Z","caller":"traceutil/trace.go:171","msg":"trace[687212945] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:1; response_revision:871; }","duration":"130.768911ms","start":"2024-09-16T10:24:33.834532Z","end":"2024-09-16T10:24:33.965301Z","steps":["trace[687212945] 'agreement among raft nodes before linearized reading'  (duration: 130.728326ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965159Z","caller":"traceutil/trace.go:171","msg":"trace[1851867066] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:871; }","duration":"130.30942ms","start":"2024-09-16T10:24:33.834844Z","end":"2024-09-16T10:24:33.965154Z","steps":["trace[1851867066] 'agreement among raft nodes before linearized reading'  (duration: 130.267065ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965180Z","caller":"traceutil/trace.go:171","msg":"trace[395277833] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.138451ms","start":"2024-09-16T10:24:33.835036Z","end":"2024-09-16T10:24:33.965175Z","steps":["trace[395277833] 'agreement among raft nodes before linearized reading'  (duration: 130.084008ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.964761Z","caller":"traceutil/trace.go:171","msg":"trace[1846466404] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.050288ms","start":"2024-09-16T10:24:33.834699Z","end":"2024-09-16T10:24:33.964750Z","steps":["trace[1846466404] 'agreement among raft nodes before linearized reading'  (duration: 129.823354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.867331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964791Z","caller":"traceutil/trace.go:171","msg":"trace[1570104672] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"101.79293ms","start":"2024-09-16T10:24:33.862992Z","end":"2024-09-16T10:24:33.964785Z","steps":["trace[1570104672] 'agreement among raft nodes before linearized reading'  (duration: 101.763738ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965421Z","caller":"traceutil/trace.go:171","msg":"trace[1827982125] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:871; }","duration":"130.890995ms","start":"2024-09-16T10:24:33.834525Z","end":"2024-09-16T10:24:33.965416Z","steps":["trace[1827982125] 'agreement among raft nodes before linearized reading'  (duration: 130.852764ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965209Z","caller":"traceutil/trace.go:171","msg":"trace[945447364] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.449227ms","start":"2024-09-16T10:24:33.834754Z","end":"2024-09-16T10:24:33.965203Z","steps":["trace[945447364] 'agreement among raft nodes before linearized reading'  (duration: 130.396497ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.001003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-09-16T10:24:33.965579Z","caller":"traceutil/trace.go:171","msg":"trace[1490541276] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:871; }","duration":"131.063942ms","start":"2024-09-16T10:24:33.834502Z","end":"2024-09-16T10:24:33.965566Z","steps":["trace[1490541276] 'agreement among raft nodes before linearized reading'  (duration: 130.98224ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.964852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.18611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/snapshot-controller\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2024-09-16T10:24:33.965093Z","caller":"traceutil/trace.go:171","msg":"trace[1524858032] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"129.821011ms","start":"2024-09-16T10:24:33.835267Z","end":"2024-09-16T10:24:33.965088Z","steps":["trace[1524858032] 'agreement among raft nodes before linearized reading'  (duration: 129.760392ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965632Z","caller":"traceutil/trace.go:171","msg":"trace[945136232] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/snapshot-controller; range_end:; response_count:1; response_revision:871; }","duration":"129.963575ms","start":"2024-09-16T10:24:33.835661Z","end":"2024-09-16T10:24:33.965624Z","steps":["trace[945136232] 'agreement among raft nodes before linearized reading'  (duration: 129.14136ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:26.413976Z","caller":"traceutil/trace.go:171","msg":"trace[182413184] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"129.574416ms","start":"2024-09-16T10:25:26.284376Z","end":"2024-09-16T10:25:26.413950Z","steps":["trace[182413184] 'process raft request'  (duration: 67.733345ms)","trace[182413184] 'compare'  (duration: 61.701552ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:48.300626Z","caller":"traceutil/trace.go:171","msg":"trace[869038067] transaction","detail":"{read_only:false; response_revision:1265; number_of_response:1; }","duration":"110.748846ms","start":"2024-09-16T10:25:48.189856Z","end":"2024-09-16T10:25:48.300605Z","steps":["trace[869038067] 'process raft request'  (duration: 107.391476ms)"],"step_count":1}
	
	
	==> gcp-auth [0dbc187486a77d691a5db4775360d83cdf6dd7084d4c3bd9123b7e051fd6bd74] <==
	2024/09/16 10:25:47 GCP Auth Webhook started!
	
	
	==> kernel <==
	 10:26:18 up 8 min,  0 users,  load average: 1.22, 0.71, 0.30
	Linux addons-821781 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] <==
	I0916 10:24:24.599610       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:24:24.599644       1 metrics.go:61] Registering metrics
	I0916 10:24:24.599709       1 controller.go:374] Syncing nftables rules
	I0916 10:24:33.305400       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:24:33.305440       1 main.go:299] handling current node
	I0916 10:24:43.298852       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:24:43.298884       1 main.go:299] handling current node
	I0916 10:24:53.298500       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:24:53.298540       1 main.go:299] handling current node
	I0916 10:25:03.298433       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:03.298473       1 main.go:299] handling current node
	I0916 10:25:13.302332       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:13.302385       1 main.go:299] handling current node
	I0916 10:25:23.298374       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:23.298404       1 main.go:299] handling current node
	I0916 10:25:33.299058       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:33.299118       1 main.go:299] handling current node
	I0916 10:25:43.305413       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:43.305453       1 main.go:299] handling current node
	I0916 10:25:53.299376       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:53.299407       1 main.go:299] handling current node
	I0916 10:26:03.303024       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:03.303056       1 main.go:299] handling current node
	I0916 10:26:13.305426       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:13.305472       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] <==
	I0916 10:23:59.467448       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0916 10:23:59.519690       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.136.167"}
	I0916 10:24:02.751335       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.58.20"}
	W0916 10:24:33.565907       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	W0916 10:24:33.565951       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.565953       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	E0916 10:24:33.565979       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:33.599472       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.599513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:58.720213       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 10:24:58.720232       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:58.720259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 10:24:58.720301       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:24:58.721354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 10:24:58.721362       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:25:54.202103       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:25:54.202136       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.74.143:443: connect: connection refused" logger="UnhandledError"
	E0916 10:25:54.202195       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:25:54.215066       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] <==
	I0916 10:25:24.560895       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:25:24.567197       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:25:24.572256       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:25:27.609080       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="6.031614ms"
	I0916 10:25:27.609183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="68.217µs"
	I0916 10:25:31.323381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-821781"
	I0916 10:25:32.530651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="5.979996ms"
	I0916 10:25:32.530774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="81.533µs"
	I0916 10:25:44.728709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="77.851µs"
	I0916 10:25:47.538412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="7.00218ms"
	I0916 10:25:47.538515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="67.443µs"
	I0916 10:25:47.740539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="5.014444ms"
	I0916 10:25:47.740618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="47.984µs"
	I0916 10:25:48.086555       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:25:48.313185       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:25:49.714903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-821781"
	E0916 10:25:51.912673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 10:25:52.312723       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 10:25:54.011326       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:25:54.030436       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:25:54.196717       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="6.665791ms"
	I0916 10:25:54.196879       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="74.235µs"
	I0916 10:25:57.806704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.152323ms"
	I0916 10:25:57.807074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="97.304µs"
	I0916 10:26:17.302554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.099µs"
	
	
	==> kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] <==
	I0916 10:23:52.638596       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:52.921753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:52.922374       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:53.313675       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:53.319718       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:53.497957       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:53.508623       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:53.508659       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:53.510794       1 config.go:199] "Starting service config controller"
	I0916 10:23:53.510833       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:53.510868       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:53.510874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:53.511480       1 config.go:328] "Starting node config controller"
	I0916 10:23:53.511491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:53.617474       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:53.617556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:23:53.711794       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] <==
	W0916 10:23:44.897301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0916 10:23:44.897124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:44.898296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:44.897140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:44.898337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:44.898344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:45.722888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:45.722927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.731239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.731280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.734491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:23:45.734527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.741804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.741845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.771121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:45.771158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.886831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.886867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.913242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.913290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:46.023935       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:46.023972       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:23:48.220429       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:25:57 addons-821781 kubelet[1623]: E0916 10:25:57.215641    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482357215444853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:469506,},InodesUsed:&UInt64Value{Value:188,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:04 addons-821781 kubelet[1623]: I0916 10:26:04.107697    1623 scope.go:117] "RemoveContainer" containerID="db9122887911e02d79edcf9cc44e18f12e57d7a8bfb82088f4161b7720f49875"
	Sep 16 10:26:04 addons-821781 kubelet[1623]: E0916 10:26:04.107884    1623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-fmlhp_gadget(2432b1c2-ccad-4646-9941-b5be3a66cf1b)\"" pod="gadget/gadget-fmlhp" podUID="2432b1c2-ccad-4646-9941-b5be3a66cf1b"
	Sep 16 10:26:07 addons-821781 kubelet[1623]: E0916 10:26:07.217418    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482367217222990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:469506,},InodesUsed:&UInt64Value{Value:188,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:07 addons-821781 kubelet[1623]: E0916 10:26:07.217456    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482367217222990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:469506,},InodesUsed:&UInt64Value{Value:188,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:11 addons-821781 kubelet[1623]: I0916 10:26:11.107243    1623 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-48kvj" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: E0916 10:26:17.219421    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482377219266247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:469506,},InodesUsed:&UInt64Value{Value:188,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: E0916 10:26:17.219457    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482377219266247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:469506,},InodesUsed:&UInt64Value{Value:188,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.599361    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4fnv\" (UniqueName: \"kubernetes.io/projected/36c41e69-8354-4fce-98a3-99b23a9ab570-kube-api-access-x4fnv\") pod \"36c41e69-8354-4fce-98a3-99b23a9ab570\" (UID: \"36c41e69-8354-4fce-98a3-99b23a9ab570\") "
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.601240    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c41e69-8354-4fce-98a3-99b23a9ab570-kube-api-access-x4fnv" (OuterVolumeSpecName: "kube-api-access-x4fnv") pod "36c41e69-8354-4fce-98a3-99b23a9ab570" (UID: "36c41e69-8354-4fce-98a3-99b23a9ab570"). InnerVolumeSpecName "kube-api-access-x4fnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.700602    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz27g\" (UniqueName: \"kubernetes.io/projected/44cd3bc9-5996-4fb6-b54d-fe98c6c50a75-kube-api-access-qz27g\") pod \"44cd3bc9-5996-4fb6-b54d-fe98c6c50a75\" (UID: \"44cd3bc9-5996-4fb6-b54d-fe98c6c50a75\") "
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.700714    1623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x4fnv\" (UniqueName: \"kubernetes.io/projected/36c41e69-8354-4fce-98a3-99b23a9ab570-kube-api-access-x4fnv\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.703657    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44cd3bc9-5996-4fb6-b54d-fe98c6c50a75-kube-api-access-qz27g" (OuterVolumeSpecName: "kube-api-access-qz27g") pod "44cd3bc9-5996-4fb6-b54d-fe98c6c50a75" (UID: "44cd3bc9-5996-4fb6-b54d-fe98c6c50a75"). InnerVolumeSpecName "kube-api-access-qz27g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.797558    1623 scope.go:117] "RemoveContainer" containerID="c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.801210    1623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qz27g\" (UniqueName: \"kubernetes.io/projected/44cd3bc9-5996-4fb6-b54d-fe98c6c50a75-kube-api-access-qz27g\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.814956    1623 scope.go:117] "RemoveContainer" containerID="c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: E0916 10:26:17.815464    1623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a\": container with ID starting with c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a not found: ID does not exist" containerID="c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.815519    1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a"} err="failed to get container status \"c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a\": rpc error: code = NotFound desc = could not find container \"c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a\": container with ID starting with c2005114512cfcc46499d3a3d9005d92e233839e58283999f8943f16a48fae0a not found: ID does not exist"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.815591    1623 scope.go:117] "RemoveContainer" containerID="3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.840756    1623 scope.go:117] "RemoveContainer" containerID="3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: E0916 10:26:17.841253    1623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78\": container with ID starting with 3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78 not found: ID does not exist" containerID="3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78"
	Sep 16 10:26:17 addons-821781 kubelet[1623]: I0916 10:26:17.841297    1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78"} err="failed to get container status \"3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78\": rpc error: code = NotFound desc = could not find container \"3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78\": container with ID starting with 3eea583cc3d10534ad0a851dcc4411e7ae5c9dffab0997d8342e189dafbd6e78 not found: ID does not exist"
	Sep 16 10:26:18 addons-821781 kubelet[1623]: I0916 10:26:18.106941    1623 scope.go:117] "RemoveContainer" containerID="db9122887911e02d79edcf9cc44e18f12e57d7a8bfb82088f4161b7720f49875"
	Sep 16 10:26:19 addons-821781 kubelet[1623]: I0916 10:26:19.108774    1623 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36c41e69-8354-4fce-98a3-99b23a9ab570" path="/var/lib/kubelet/pods/36c41e69-8354-4fce-98a3-99b23a9ab570/volumes"
	Sep 16 10:26:19 addons-821781 kubelet[1623]: I0916 10:26:19.109307    1623 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44cd3bc9-5996-4fb6-b54d-fe98c6c50a75" path="/var/lib/kubelet/pods/44cd3bc9-5996-4fb6-b54d-fe98c6c50a75/volumes"
	
	
	==> storage-provisioner [fd1c0fa2e8742125904216a45b6d84f9b367888422cb6083d3e482fd77452994] <==
	I0916 10:24:34.797513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:34.805288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:34.805397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:34.813404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:34.813588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	I0916 10:24:34.814304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d6ca95d-581a-4537-b803-ac9e02f43ec1", APIVersion:"v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4 became leader
	I0916 10:24:34.914571       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-821781 -n addons-821781
helpers_test.go:261: (dbg) Run:  kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (350.229µs)
helpers_test.go:263: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/Registry (12.89s)

x
+
TestAddons/parallel/Ingress (2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-821781 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:209: (dbg) Non-zero exit: kubectl --context addons-821781 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: fork/exec /usr/local/bin/kubectl: exec format error (344.442µs)
addons_test.go:210: failed waiting for ingress-nginx-controller : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-821781
helpers_test.go:235: (dbg) docker inspect addons-821781:

-- stdout --
	[
	    {
	        "Id": "60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9",
	        "Created": "2024-09-16T10:23:34.422231958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:34.564816551Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hosts",
	        "LogPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9-json.log",
	        "Name": "/addons-821781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-821781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-821781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-821781",
	                "Source": "/var/lib/docker/volumes/addons-821781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-821781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-821781",
	                "name.minikube.sigs.k8s.io": "addons-821781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb89cb54fc4711f104a02c8d2ebaaa0dae68769e21054477c7dd719ee876c61d",
	            "SandboxKey": "/var/run/docker/netns/cb89cb54fc47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-821781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "66d8d4a2fe0f9ff012a57288f3992a27df27bc2a73eb33a40ff3adbc0fa270ea",
	                    "EndpointID": "54da588c62c62ca60fdaac7dbe299e76b7fad63e791a3bfc770a096d3640b2fb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-821781",
	                        "60dd933522c2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-821781 -n addons-821781
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-821781 logs -n 25: (1.232702243s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-534059              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-920673              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-291625               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-291625            | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-597115                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44611               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-597115              | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | disable dashboard -p                 | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| start   | -p addons-821781 --wait=true         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:26 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| ip      | addons-821781 ip                     | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:11.785613   12642 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:11.786005   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786020   12642 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:11.786026   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786201   12642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:23:11.786846   12642 out.go:352] Setting JSON to false
	I0916 10:23:11.787652   12642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":332,"bootTime":1726481860,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:11.787744   12642 start.go:139] virtualization: kvm guest
	I0916 10:23:11.789971   12642 out.go:177] * [addons-821781] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:11.791581   12642 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:11.791602   12642 notify.go:220] Checking for updates...
	I0916 10:23:11.793279   12642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:11.794876   12642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:11.796234   12642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:23:11.797605   12642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:11.798881   12642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:11.800381   12642 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:11.822354   12642 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:11.822435   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.875294   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.865218731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.875392   12642 docker.go:318] overlay module found
	I0916 10:23:11.877179   12642 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:11.878539   12642 start.go:297] selected driver: docker
	I0916 10:23:11.878555   12642 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:11.878567   12642 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:11.879376   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.928080   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.918595521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.928248   12642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:11.928460   12642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:11.930314   12642 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:11.931824   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:11.931880   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:11.931896   12642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:11.931970   12642 start.go:340] cluster config:
	{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:11.933478   12642 out.go:177] * Starting "addons-821781" primary control-plane node in "addons-821781" cluster
	I0916 10:23:11.934979   12642 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:23:11.936645   12642 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:11.938033   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:11.938077   12642 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:23:11.938086   12642 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:11.938151   12642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:11.938181   12642 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:11.938195   12642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:23:11.938528   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:11.938559   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json: {Name:mkb2d65543ac9e0f1211fb3bb619eaf59705ab34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:11.954455   12642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:11.954550   12642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:11.954565   12642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:11.954570   12642 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:11.954578   12642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:11.954585   12642 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:24.468174   12642 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:24.468219   12642 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:24.468270   12642 start.go:360] acquireMachinesLock for addons-821781: {Name:mk2b69b21902e1a037d888f1a4c14b20c068c000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:24.468392   12642 start.go:364] duration metric: took 101µs to acquireMachinesLock for "addons-821781"
	I0916 10:23:24.468422   12642 start.go:93] Provisioning new machine with config: &{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:24.468511   12642 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:24.470800   12642 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:24.471033   12642 start.go:159] libmachine.API.Create for "addons-821781" (driver="docker")
	I0916 10:23:24.471057   12642 client.go:168] LocalClient.Create starting
	I0916 10:23:24.471161   12642 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:23:24.563569   12642 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:23:24.843226   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:24.859906   12642 cli_runner.go:211] docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:24.859982   12642 network_create.go:284] running [docker network inspect addons-821781] to gather additional debugging logs...
	I0916 10:23:24.860006   12642 cli_runner.go:164] Run: docker network inspect addons-821781
	W0916 10:23:24.875695   12642 cli_runner.go:211] docker network inspect addons-821781 returned with exit code 1
	I0916 10:23:24.875725   12642 network_create.go:287] error running [docker network inspect addons-821781]: docker network inspect addons-821781: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-821781 not found
	I0916 10:23:24.875736   12642 network_create.go:289] output of [docker network inspect addons-821781]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-821781 not found
	
	** /stderr **
	I0916 10:23:24.875825   12642 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:24.892396   12642 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019c5ea0}
	I0916 10:23:24.892450   12642 network_create.go:124] attempt to create docker network addons-821781 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:24.892494   12642 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-821781 addons-821781
	I0916 10:23:24.956362   12642 network_create.go:108] docker network addons-821781 192.168.49.0/24 created
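network.go picked 192.168.49.0/24 as the first free private /24 and created the bridge with minikube's ownership labels (created_by/name.minikube.sigs.k8s.io, passed to docker network create above). Those labels make the network easy to find later; a sketch, assuming the profile still exists:

	docker network ls --filter label=name.minikube.sigs.k8s.io=addons-821781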
	I0916 10:23:24.956397   12642 kic.go:121] calculated static IP "192.168.49.2" for the "addons-821781" container
	I0916 10:23:24.956461   12642 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:24.972596   12642 cli_runner.go:164] Run: docker volume create addons-821781 --label name.minikube.sigs.k8s.io=addons-821781 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:24.991422   12642 oci.go:103] Successfully created a docker volume addons-821781
	I0916 10:23:24.991492   12642 cli_runner.go:164] Run: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:29.942508   12642 cli_runner.go:217] Completed: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.950978249s)
	I0916 10:23:29.942530   12642 oci.go:107] Successfully prepared a docker volume addons-821781
	I0916 10:23:29.942541   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:29.942558   12642 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:29.942601   12642 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:34.358289   12642 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.415644078s)
	I0916 10:23:34.358318   12642 kic.go:203] duration metric: took 4.415757339s to extract preloaded images to volume ...
	W0916 10:23:34.358449   12642 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:34.358539   12642 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:34.407126   12642 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-821781 --name addons-821781 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-821781 --network addons-821781 --ip 192.168.49.2 --volume addons-821781:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
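The docker run above publishes ports 8443 (apiserver), 22 (SSH), 2376, 5000 and 32443 on loopback with ephemeral host ports; the mapping Docker actually chose (e.g. 32768 for 22/tcp, used by the SSH steps below) can be listed with:

	docker port addons-821781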
	I0916 10:23:34.740907   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Running}}
	I0916 10:23:34.761456   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:34.779743   12642 cli_runner.go:164] Run: docker exec addons-821781 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:34.825817   12642 oci.go:144] the created container "addons-821781" has a running status.
	I0916 10:23:34.825843   12642 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa...
	I0916 10:23:35.044132   12642 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:35.071224   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.090107   12642 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:35.090127   12642 kic_runner.go:114] Args: [docker exec --privileged addons-821781 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:35.145473   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.163175   12642 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:35.163257   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.181284   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.181510   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.181525   12642 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:35.376812   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.376844   12642 ubuntu.go:169] provisioning hostname "addons-821781"
	I0916 10:23:35.376907   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.394400   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.394569   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.394582   12642 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-821781 && echo "addons-821781" | sudo tee /etc/hostname
	I0916 10:23:35.535760   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.535841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.554208   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.554394   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.554410   12642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-821781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-821781/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-821781' | sudo tee -a /etc/hosts; 
				fi
			fi
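The snippet above pins the node's hostname in the guest's /etc/hosts, rewriting the 127.0.1.1 entry if one exists and appending one otherwise, so "addons-821781" always resolves locally. A quick verification sketch, assuming the node container is still running:

	docker exec addons-821781 grep addons-821781 /etc/hosts
	# expected to include: 127.0.1.1 addons-821781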
	I0916 10:23:35.685491   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:35.685520   12642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:23:35.685538   12642 ubuntu.go:177] setting up certificates
	I0916 10:23:35.685549   12642 provision.go:84] configureAuth start
	I0916 10:23:35.685599   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:35.701932   12642 provision.go:143] copyHostCerts
	I0916 10:23:35.702012   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:23:35.702151   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:23:35.702230   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:23:35.702295   12642 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.addons-821781 san=[127.0.0.1 192.168.49.2 addons-821781 localhost minikube]
	I0916 10:23:35.783034   12642 provision.go:177] copyRemoteCerts
	I0916 10:23:35.783097   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:35.783127   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.800161   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:35.893913   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:23:35.915296   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:23:35.937405   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:35.959050   12642 provision.go:87] duration metric: took 273.490922ms to configureAuth
	I0916 10:23:35.959082   12642 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:35.959246   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:35.959337   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.977055   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.977247   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.977264   12642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:23:36.194829   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
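The drop-in written above marks the service CIDR 10.96.0.0/12 as an insecure registry range for CRI-O, so registries exposed on ClusterIPs work without TLS. To confirm the option took effect, a sketch assuming the crio unit sources /etc/sysconfig/crio.minikube into its ExecStart:

	docker exec addons-821781 cat /etc/sysconfig/crio.minikube
	docker exec addons-821781 sh -c 'ps axww | grep [c]rio'   # flag should appear on the crio command line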
	
	I0916 10:23:36.194851   12642 machine.go:96] duration metric: took 1.031655385s to provisionDockerMachine
	I0916 10:23:36.194860   12642 client.go:171] duration metric: took 11.723797841s to LocalClient.Create
	I0916 10:23:36.194875   12642 start.go:167] duration metric: took 11.723845183s to libmachine.API.Create "addons-821781"
	I0916 10:23:36.194883   12642 start.go:293] postStartSetup for "addons-821781" (driver="docker")
	I0916 10:23:36.194895   12642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:36.194953   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:36.194987   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.212136   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.306296   12642 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:36.309608   12642 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:36.309638   12642 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:36.309646   12642 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:36.309652   12642 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:36.309662   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:23:36.309721   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:23:36.309744   12642 start.go:296] duration metric: took 114.855265ms for postStartSetup
	I0916 10:23:36.310017   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.326531   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:36.326849   12642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:36.326901   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.343127   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.434151   12642 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:36.438063   12642 start.go:128] duration metric: took 11.969538805s to createHost
	I0916 10:23:36.438087   12642 start.go:83] releasing machines lock for "addons-821781", held for 11.96968194s
	I0916 10:23:36.438170   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.454099   12642 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:36.454144   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.454204   12642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:36.454276   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.472027   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.473599   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.640610   12642 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:36.644626   12642 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:23:36.780722   12642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:36.785109   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.802933   12642 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:36.803016   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.830084   12642 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:36.830106   12642 start.go:495] detecting cgroup driver to use...
	I0916 10:23:36.830135   12642 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:36.830178   12642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:23:36.843678   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:23:36.854207   12642 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:36.854255   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:36.867323   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:36.880430   12642 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:36.955777   12642 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:37.035979   12642 docker.go:233] disabling docker service ...
	I0916 10:23:37.036049   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:37.052780   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:37.063200   12642 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:37.138165   12642 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:37.215004   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:37.225051   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:37.239114   12642 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:23:37.239176   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.248375   12642 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:23:37.248431   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.257180   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.265957   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.274955   12642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:37.283271   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.291833   12642 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.305478   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
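After this run of sed edits, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following (a sketch reconstructed from the commands above, not a capture of the actual file; section headers follow the stock CRI-O layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]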
	I0916 10:23:37.314242   12642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:37.321530   12642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:37.328860   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.397743   12642 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:23:37.494696   12642 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:23:37.494784   12642 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:23:37.498069   12642 start.go:563] Will wait 60s for crictl version
	I0916 10:23:37.498121   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:23:37.501763   12642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:37.533845   12642 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:23:37.533971   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.568210   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.602768   12642 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:23:37.604266   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:37.620164   12642 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:37.623594   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:37.633351   12642 kubeadm.go:883] updating cluster {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:37.633481   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:37.633537   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.691488   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.691513   12642 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:23:37.691557   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.721834   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.721855   12642 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:37.721863   12642 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:23:37.721943   12642 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-821781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
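The unit text above overrides the kubelet's ExecStart so --hostname-override and --node-ip match the node container; a few lines below it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect what actually landed on disk, assuming the node is up:

	docker exec addons-821781 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf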
	I0916 10:23:37.722004   12642 ssh_runner.go:195] Run: crio config
	I0916 10:23:37.761799   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:37.761826   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:37.761837   12642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:37.761858   12642 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-821781 NodeName:addons-821781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:37.761998   12642 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-821781"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
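The rendered kubeadm config above is one YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small Go sketch that sanity-checks such a stream by decoding each document's apiVersion and kind; it assumes the third-party gopkg.in/yaml.v3 package and is not part of minikube:

	// Sketch only: enumerate the documents in the generated kubeadm config.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3" // assumed dependency
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break // end of the multi-document stream
			}
			if err != nil {
				panic(err)
			}
			// Expected kinds: InitConfiguration, ClusterConfiguration,
			// KubeletConfiguration, KubeProxyConfiguration.
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}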
	
	I0916 10:23:37.762053   12642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:37.770243   12642 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:37.770305   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:37.778774   12642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:23:37.794482   12642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:37.810783   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0916 10:23:37.827097   12642 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:37.830351   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
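The bash one-liner above refreshes the control-plane.minikube.internal entry in /etc/hosts in one pass: it filters out any stale line, appends the current IP, and copies the temp file back into place with sudo. An equivalent, illustrative Go version; it writes to a hypothetical /etc/hosts.new instead of replacing /etc/hosts directly:

	// Sketch only: the same /etc/hosts edit as the bash one-liner above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.2\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale control-plane entry, whatever IP it pointed at.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		// Hypothetical destination; the real flow writes /tmp/h.$$ then `sudo cp`.
		if err := os.WriteFile("/etc/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /etc/hosts.new")
	}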
	I0916 10:23:37.840395   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.914798   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:37.926573   12642 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781 for IP: 192.168.49.2
	I0916 10:23:37.926602   12642 certs.go:194] generating shared ca certs ...
	I0916 10:23:37.926624   12642 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:37.926767   12642 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:23:38.165524   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt ...
	I0916 10:23:38.165552   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt: {Name:mk958b9d7b4e596cca12a43812b033701a1808ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165715   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key ...
	I0916 10:23:38.165727   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key: {Name:mk218c15b5e68b365653a5a88f283b4fd2a63397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165796   12642 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:23:38.317748   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt ...
	I0916 10:23:38.317782   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt: {Name:mke289e24f4d60c196cc49c14787f9db71cc62b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.317972   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key ...
	I0916 10:23:38.317984   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key: {Name:mk238a3132478eab5de811cbc3626e41ad1154f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.318059   12642 certs.go:256] generating profile certs ...
	I0916 10:23:38.318110   12642 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key
	I0916 10:23:38.318136   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt with IP's: []
	I0916 10:23:38.579861   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt ...
	I0916 10:23:38.579894   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: {Name:mk21e84efd5822ab69a95d39a845706a794c0061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580087   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key ...
	I0916 10:23:38.580102   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key: {Name:mkafbaeecfaf57db916f1469c60f36a7c0603c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580202   12642 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e
	I0916 10:23:38.580226   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:38.661523   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e ...
	I0916 10:23:38.661551   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e: {Name:mk3603fd200d1d0c9c664f1f9e2d3f37d0da819e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661721   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e ...
	I0916 10:23:38.661734   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e: {Name:mk979e39754dc7623208af4e4f8346a3268b5e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661802   12642 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt
	I0916 10:23:38.661872   12642 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key
	I0916 10:23:38.661916   12642 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key
	I0916 10:23:38.661934   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt with IP's: []
	I0916 10:23:38.868848   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt ...
	I0916 10:23:38.868882   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt: {Name:mk60143e6be001872095f4a07cc8800f3883cb9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869061   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key ...
	I0916 10:23:38.869072   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key: {Name:mkfcb902307b78d6d49e6123539922887bdc7bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869254   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:38.869291   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:23:38.869321   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:38.869365   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
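certs.go has just generated a local CA (minikubeCA), a proxyClientCA, and the per-profile serving and client certificates that the scp steps below copy onto the node. A self-contained sketch of the CA-generation step using Go's standard crypto/x509 package; the 26280h lifetime mirrors the CertExpiration value in the config above, though the exact certificate fields minikube sets may differ:

	// Sketch only: a minikubeCA-style self-signed CA with crypto/x509.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		// Self-signed: the template acts as both subject and issuer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}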
	I0916 10:23:38.869947   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:38.891875   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:38.913044   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:38.935301   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:38.957638   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:38.978769   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:38.999283   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:39.020509   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:39.041006   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:39.062022   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:39.077689   12642 ssh_runner.go:195] Run: openssl version
	I0916 10:23:39.082828   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:39.091794   12642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094851   12642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094909   12642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.101357   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
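The b5213941.0 symlink created above is the OpenSSL subject-hash name that TLS tooling expects under /etc/ssl/certs. A small sketch that reproduces the name by shelling out to the same `openssl x509 -hash -noout` call seen in the log (assumes openssl is on PATH):

	// Sketch only: derive the /etc/ssl/certs/<hash>.0 name via openssl.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		fmt.Printf("/etc/ssl/certs/%s.0\n", hash) // prints b5213941.0 in this run
	}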
	I0916 10:23:39.110237   12642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:39.113275   12642 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:39.113343   12642 kubeadm.go:392] StartCluster: {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:39.113424   12642 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:39.113461   12642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:39.147213   12642 cri.go:89] found id: ""
	I0916 10:23:39.147277   12642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:39.155102   12642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:39.162655   12642 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:39.162713   12642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:39.170269   12642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:39.170287   12642 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:39.170331   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:39.177944   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:39.178006   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:39.185617   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:39.193448   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:39.193494   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:39.201778   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.209504   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:39.209560   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.217167   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:39.224794   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:39.224851   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:39.232091   12642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:39.267943   12642 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:39.268041   12642 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:39.285854   12642 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:39.285924   12642 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:39.285968   12642 kubeadm.go:310] OS: Linux
	I0916 10:23:39.286011   12642 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:39.286080   12642 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:39.286143   12642 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:39.286205   12642 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:39.286307   12642 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:39.286389   12642 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:39.286430   12642 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:39.286498   12642 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:39.286566   12642 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:39.334020   12642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:39.334137   12642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:39.334277   12642 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:39.339811   12642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:39.342965   12642 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:39.343081   12642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:39.343174   12642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:39.501471   12642 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:39.656891   12642 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:39.803369   12642 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:39.956554   12642 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:40.122217   12642 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:40.122346   12642 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.178788   12642 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:40.178946   12642 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.253274   12642 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:40.444072   12642 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:40.539814   12642 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:40.539908   12642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:40.740107   12642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:40.805609   12642 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:41.114974   12642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:41.183175   12642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:41.287722   12642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:41.288131   12642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:41.290675   12642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:41.293432   12642 out.go:235]   - Booting up control plane ...
	I0916 10:23:41.293554   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:41.293636   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:41.293726   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:41.302536   12642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:41.307914   12642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:41.307975   12642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:41.387469   12642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:41.387659   12642 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:41.889098   12642 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.704632ms
	I0916 10:23:41.889216   12642 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:46.391264   12642 kubeadm.go:310] [api-check] The API server is healthy after 4.502175176s
	I0916 10:23:46.402989   12642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:46.412298   12642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:46.429664   12642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:46.429953   12642 kubeadm.go:310] [mark-control-plane] Marking the node addons-821781 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:46.439045   12642 kubeadm.go:310] [bootstrap-token] Using token: 08e8kf.82j5psgo1mt86ygt
	I0916 10:23:46.440988   12642 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:46.441118   12642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:46.443591   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:46.448741   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:46.451033   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:46.453482   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:46.457052   12642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:46.798062   12642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:47.220263   12642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:47.797780   12642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:47.798623   12642 kubeadm.go:310] 
	I0916 10:23:47.798710   12642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:47.798722   12642 kubeadm.go:310] 
	I0916 10:23:47.798838   12642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:47.798858   12642 kubeadm.go:310] 
	I0916 10:23:47.798897   12642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:47.798955   12642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:47.799030   12642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:47.799050   12642 kubeadm.go:310] 
	I0916 10:23:47.799117   12642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:47.799125   12642 kubeadm.go:310] 
	I0916 10:23:47.799191   12642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:47.799202   12642 kubeadm.go:310] 
	I0916 10:23:47.799273   12642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:47.799371   12642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:47.799433   12642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:47.799458   12642 kubeadm.go:310] 
	I0916 10:23:47.799618   12642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:47.799702   12642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:47.799727   12642 kubeadm.go:310] 
	I0916 10:23:47.799855   12642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800005   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:23:47.800028   12642 kubeadm.go:310] 	--control-plane 
	I0916 10:23:47.800034   12642 kubeadm.go:310] 
	I0916 10:23:47.800137   12642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:47.800147   12642 kubeadm.go:310] 
	I0916 10:23:47.800244   12642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800384   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:23:47.802505   12642 kubeadm.go:310] W0916 10:23:39.265300    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.802965   12642 kubeadm.go:310] W0916 10:23:39.265967    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.803297   12642 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:47.803488   12642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
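The --discovery-token-ca-cert-hash printed in the join commands above is, by kubeadm's convention, the SHA-256 of the CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A sketch that recomputes it from ca.crt so a joining node can be verified out of band; paths are copied from this run and error handling is minimal:

	// Sketch only: recompute kubeadm's discovery-token-ca-cert-hash from ca.crt.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The hash covers the DER-encoded SubjectPublicKeyInfo, not the whole cert.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}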
	I0916 10:23:47.803508   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:47.803517   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:47.805594   12642 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:47.806930   12642 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:47.811723   12642 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:47.811744   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:47.829314   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:23:48.045373   12642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:48.045433   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.045434   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-821781 minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-821781 minikube.k8s.io/primary=true
	I0916 10:23:48.053143   12642 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:48.121750   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.622580   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.121829   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.622144   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.122640   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.622473   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.122549   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.622693   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.122279   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.622129   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.815735   12642 kubeadm.go:1113] duration metric: took 4.770357411s to wait for elevateKubeSystemPrivileges
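The ten `kubectl get sa default` runs above are a plain readiness poll: elevateKubeSystemPrivileges retries roughly every 500ms until the default service account exists, then applies the cluster-admin binding. A minimal Go version of the same loop; binary and kubeconfig paths are copied from the log, and the 2-minute deadline is an assumption:

	// Sketch only: poll until the default service account exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}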
	I0916 10:23:52.815769   12642 kubeadm.go:394] duration metric: took 13.702442151s to StartCluster
	I0916 10:23:52.815790   12642 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.815914   12642 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:52.816324   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.816539   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:52.816545   12642 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:52.816616   12642 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:52.816735   12642 addons.go:69] Setting yakd=true in profile "addons-821781"
	I0916 10:23:52.816749   12642 addons.go:69] Setting ingress-dns=true in profile "addons-821781"
	I0916 10:23:52.816756   12642 addons.go:69] Setting default-storageclass=true in profile "addons-821781"
	I0916 10:23:52.816766   12642 addons.go:69] Setting inspektor-gadget=true in profile "addons-821781"
	I0916 10:23:52.816771   12642 addons.go:234] Setting addon ingress-dns=true in "addons-821781"
	I0916 10:23:52.816777   12642 addons.go:234] Setting addon inspektor-gadget=true in "addons-821781"
	I0916 10:23:52.816781   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.816788   12642 addons.go:69] Setting cloud-spanner=true in profile "addons-821781"
	I0916 10:23:52.816798   12642 addons.go:234] Setting addon cloud-spanner=true in "addons-821781"
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816821   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816815   12642 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-821781"
	I0916 10:23:52.816831   12642 addons.go:69] Setting volumesnapshots=true in profile "addons-821781"
	I0916 10:23:52.816846   12642 addons.go:234] Setting addon volumesnapshots=true in "addons-821781"
	I0916 10:23:52.816852   12642 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-821781"
	I0916 10:23:52.816859   12642 addons.go:69] Setting gcp-auth=true in profile "addons-821781"
	I0916 10:23:52.816864   12642 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-821781"
	I0916 10:23:52.816869   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816875   12642 mustload.go:65] Loading cluster: addons-821781
	I0916 10:23:52.816879   12642 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:52.816885   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816897   12642 addons.go:69] Setting ingress=true in profile "addons-821781"
	I0916 10:23:52.816908   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816914   12642 addons.go:234] Setting addon ingress=true in "addons-821781"
	I0916 10:23:52.816821   12642 addons.go:69] Setting storage-provisioner=true in profile "addons-821781"
	I0916 10:23:52.816951   12642 addons.go:234] Setting addon storage-provisioner=true in "addons-821781"
	I0916 10:23:52.816952   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816967   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816991   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.817237   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817375   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816847   12642 addons.go:69] Setting helm-tiller=true in profile "addons-821781"
	I0916 10:23:52.817387   12642 addons.go:69] Setting registry=true in profile "addons-821781"
	I0916 10:23:52.817393   12642 addons.go:234] Setting addon helm-tiller=true in "addons-821781"
	I0916 10:23:52.817398   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817399   12642 addons.go:234] Setting addon registry=true in "addons-821781"
	I0916 10:23:52.817413   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817421   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817453   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817460   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817835   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817839   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.818548   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816758   12642 addons.go:234] Setting addon yakd=true in "addons-821781"
	I0916 10:23:52.818812   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816831   12642 addons.go:69] Setting metrics-server=true in profile "addons-821781"
	I0916 10:23:52.819624   12642 addons.go:234] Setting addon metrics-server=true in "addons-821781"
	I0916 10:23:52.819661   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816777   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-821781"
	I0916 10:23:52.820048   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820121   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820925   12642 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:52.817377   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.823819   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:52.819369   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817378   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816830   12642 addons.go:69] Setting volcano=true in profile "addons-821781"
	I0916 10:23:52.827260   12642 addons.go:234] Setting addon volcano=true in "addons-821781"
	I0916 10:23:52.827341   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.827903   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816822   12642 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-821781"
	I0916 10:23:52.828667   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-821781"
	I0916 10:23:52.846468   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.849708   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.849779   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.858180   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:52.860117   12642 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:52.861491   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:52.861515   12642 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:52.861580   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.861792   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:52.863536   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:52.865265   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:52.868592   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:52.871812   12642 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:52.873467   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:52.873491   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:52.873553   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.873826   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:52.875500   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:52.876891   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:52.878274   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:52.878295   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:52.878358   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.885380   12642 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:52.887180   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:52.887200   12642 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:52.887253   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.887590   12642 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:52.889278   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:52.889293   12642 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:52.891126   12642 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:52.891146   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:52.891207   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.891375   12642 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:52.893052   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.893213   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:52.893225   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:52.893284   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.895906   12642 addons.go:234] Setting addon default-storageclass=true in "addons-821781"
	I0916 10:23:52.895950   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.896395   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.902602   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.904755   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:52.904779   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:52.904841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.913208   12642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:52.916490   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:52.916516   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:52.916578   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.920102   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.921373   12642 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:52.924287   12642 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:52.924310   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:52.924367   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.924567   12642 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:52.924966   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.927248   12642 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:52.927271   12642 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:52.927324   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	W0916 10:23:52.939182   12642 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 10:23:52.945562   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.947311   12642 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:52.949640   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:52.949813   12642 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:52.949828   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:52.949883   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.950915   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:52.950951   12642 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:52.951010   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.967061   12642 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-821781"
	I0916 10:23:52.967112   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.967600   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.976558   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.977128   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979407   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979587   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979666   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982295   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982301   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984209   12642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:52.984228   12642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:52.984267   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984282   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.985867   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.992433   12642 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:52.996036   12642 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:52.998876   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:52.998899   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:52.998966   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:53.007398   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.031542   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.198285   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
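The sed pipeline above patches the CoreDNS ConfigMap so host.minikube.internal resolves to the host gateway. Reconstructed from those sed expressions, the stanza inserted ahead of the `forward . /etc/resolv.conf` line looks roughly like:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }

A `log` directive is also inserted ahead of the existing `errors` line, and the edited ConfigMap is pushed back with `kubectl replace -f -`.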
	I0916 10:23:53.222232   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:53.223607   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:53.303303   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:53.303391   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:53.412003   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:53.494460   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:53.495317   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:53.495388   12642 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:53.500279   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:53.500366   12642 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:53.518431   12642 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:53.518460   12642 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:53.595357   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:53.595389   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:53.595502   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:53.595520   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:53.601235   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:53.601265   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:53.603514   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:53.610819   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:53.613851   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:53.696891   12642 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:53.696920   12642 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:53.697186   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:53.711949   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:53.711981   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:53.793955   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:53.794047   12642 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:53.795627   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:53.795652   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:53.810579   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:53.810623   12642 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:53.818121   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:53.818143   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:54.008884   12642 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:54.008915   12642 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:54.097416   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:54.097502   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:54.105048   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:54.114541   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:54.116113   12642 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.116175   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:54.194093   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:54.194181   12642 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:54.310015   12642 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:54.310107   12642 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:54.315950   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:54.316029   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:54.409828   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.595664   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:54.595750   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:54.795049   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:54.795131   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:54.795986   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:54.796042   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:54.798857   12642 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.60047423s)
	I0916 10:23:54.798946   12642 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.576635993s)
	I0916 10:23:54.798970   12642 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
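The bash pipeline that completed at 10:23:54.798857 is how that host record gets in: sed inserts a hosts block (mapping host.minikube.internal to the Docker gateway 192.168.49.1) before CoreDNS's `forward . /etc/resolv.conf` plugin and a `log` directive before `errors`, then kubectl replace pushes the edited ConfigMap back. The insertion itself, sketched as plain Go string handling (helper name is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block immediately before the
    // forward plugin line, mirroring the sed expression in the log.
    func injectHostRecord(corefile, gatewayIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(hosts)
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }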
	I0916 10:23:54.799977   12642 node_ready.go:35] waiting up to 6m0s for node "addons-821781" to be "Ready" ...
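node_ready.go:35 opens a bounded six-minute wait on the node's Ready condition; the recurring `has status "Ready":"False"` lines further down are iterations of that poll. The equivalent check with client-go, assuming a configured *kubernetes.Clientset (a sketch, not minikube's code):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node's NodeReady condition is True.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }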
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:54.816489   12642 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:54.816544   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:55.096307   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:55.096398   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:55.098163   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:55.303720   12642 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:55.303802   12642 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:55.310866   12642 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:55.310939   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:55.509740   12642 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-821781" context rescaled to 1 replicas
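kapi.go:214 shrinks the coredns Deployment to a single replica, presumably to save resources on a one-node cluster. Via the scale subresource in client-go this read-modify-write looks like (sketch; clientset assumed configured):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleDeployment sets the replica count through the scale subresource,
    // e.g. scaleDeployment(ctx, cs, "kube-system", "coredns", 1).
    func scaleDeployment(ctx context.Context, cs *kubernetes.Clientset, ns, name string, replicas int32) error {
        s, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        s.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, s, metav1.UpdateOptions{})
        return err
    }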
	I0916 10:23:55.603909   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:55.603992   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:55.609116   12642 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:55.609197   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:55.701381   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:56.095470   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:56.095499   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:56.106357   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:56.115945   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.892303376s)
	I0916 10:23:56.209795   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:56.209873   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:56.410426   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:56.410515   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:56.511332   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.511408   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:56.813818   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.895029   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:58.497986   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.085861545s)
	I0916 10:23:58.498185   12642 addons.go:475] Verifying addon ingress=true in "addons-821781"
	I0916 10:23:58.498214   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.894594589s)
	I0916 10:23:58.498365   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.801136889s)
	I0916 10:23:58.498429   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.393306067s)
	I0916 10:23:58.498499   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.383877389s)
	I0916 10:23:58.498516   12642 addons.go:475] Verifying addon metrics-server=true in "addons-821781"
	I0916 10:23:58.498551   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.08869279s)
	I0916 10:23:58.498561   12642 addons.go:475] Verifying addon registry=true in "addons-821781"
	I0916 10:23:58.498687   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.40044143s)
	I0916 10:23:58.498148   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.003579441s)
	I0916 10:23:58.498265   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.887343223s)
	I0916 10:23:58.498721   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.884394452s)
	I0916 10:23:58.500166   12642 out.go:177] * Verifying registry addon...
	I0916 10:23:58.500168   12642 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-821781 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:58.500186   12642 out.go:177] * Verifying ingress addon...
	I0916 10:23:58.502840   12642 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:23:58.502984   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0916 10:23:58.505976   12642 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
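The 'storage-provisioner-rancher' warning above is a textbook optimistic-concurrency failure: between reading the local-path StorageClass and writing back the default-class annotation, another writer updated the object, so the stale write was rejected. The standard remedy is to redo the read-modify-write whenever the update conflicts, e.g. with client-go's retry helper (a sketch under that assumption, not the addon's actual code):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markDefault re-reads and re-applies the annotation whenever the
    // update fails with 409 Conflict, the error shown in the log above.
    func markDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }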
	I0916 10:23:58.508066   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:58.508081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:58.508299   12642 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:23:58.508315   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
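These kapi.go:75/86/96 triplets, and the long runs of `current state: Pending` that dominate the rest of this log, are one polling loop per addon: list the pods matching a label selector, report the aggregate phase, repeat until all are Running. Condensed into client-go terms (a sketch; the selector strings are the ones from the log):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsRunning succeeds once every pod matching selector is Running, e.g.
    // waitPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry").
    func waitPodsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }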
	I0916 10:23:59.012329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.110843   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.299182   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.597694462s)
	W0916 10:23:59.299228   12642 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:59.299250   12642 retry.go:31] will retry after 144.288551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
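This failure and retry is a CRD establishment race: a single kubectl apply both creates the VolumeSnapshot CRDs and a VolumeSnapshotClass instance of them, and the instance reaches the API server before the new kinds are being served, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries (with --force, below) after a short backoff; an alternative is to wait for the CRD's Established condition before applying any instances, sketched here with the apiextensions clientset:

    package sketch

    import (
        "context"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitCRDEstablished blocks until the named CRD reports Established,
    // after which custom resources of that kind can be applied safely.
    func waitCRDEstablished(ctx context.Context, cs *apiextensionsclient.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil
                }
                for _, c := range crd.Status.Conditions {
                    if c.Type == apiextensionsv1.Established && c.Status == apiextensionsv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }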
	I0916 10:23:59.299277   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.19282086s)
	I0916 10:23:59.305158   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:59.444238   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.506924   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.507806   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.539307   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.725399907s)
	I0916 10:23:59.539335   12642 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:59.541718   12642 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:59.543660   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:59.597366   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:59.597452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.006951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.007539   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.096393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.099134   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:00.099205   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.125424   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:24:00.418412   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:00.508361   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.509838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.518754   12642 addons.go:234] Setting addon gcp-auth=true in "addons-821781"
	I0916 10:24:00.518809   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:24:00.519365   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:24:00.536851   12642 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:00.536902   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.553493   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:24:00.596428   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.006170   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.006803   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.047121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.506287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.506534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.547185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.805560   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:02.007448   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.008038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.046600   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.202834   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.758545356s)
	I0916 10:24:02.202854   12642 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.665973141s)
	I0916 10:24:02.205053   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:02.206664   12642 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:02.208283   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:02.208296   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:02.226305   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:02.226333   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:02.244167   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.244187   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:02.298853   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.506489   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.506968   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.547297   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.899621   12642 addons.go:475] Verifying addon gcp-auth=true in "addons-821781"
	I0916 10:24:02.901591   12642 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:02.904224   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:02.907029   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:02.907051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.007207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.007880   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.047134   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.407111   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.506509   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.507075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.547522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.907027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.007265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.007643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.046594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.303245   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:04.407879   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.506365   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.506939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.547412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.907817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.006397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.007232   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.047038   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.407918   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.506892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.507154   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.547266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.907671   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.006358   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.006625   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.046717   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.407766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.506364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.506750   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.547000   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.803631   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:06.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.006037   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.006551   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.046971   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.407314   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.506338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.547256   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.907021   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.005785   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.006334   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.046439   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.408357   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.505952   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.506643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.547247   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.803661   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:08.907343   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.006189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.006703   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.046966   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.407657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.506182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.506608   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.546942   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.907283   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.005977   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.006337   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.046685   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.408104   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.507241   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.547393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.907115   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.005778   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.006115   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.047296   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.302797   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:11.407398   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.506075   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.506794   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.546885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.907330   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.006053   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.006567   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.046997   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.407912   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.506528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.507006   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.547228   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.907413   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.006062   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.006437   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.046726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.303472   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:13.407845   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.506423   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.506765   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.547162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.907106   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.005737   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.006410   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.047326   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.407189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.505915   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.506316   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.547399   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.907535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.006393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.007080   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.046972   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.407693   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.506219   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.506709   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.547052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.803455   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:15.907823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.006647   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.007106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.047456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.407960   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.506331   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.506765   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.547157   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.907551   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.006299   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.006617   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.047040   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.406899   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.506449   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.506938   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.547210   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.907861   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.006488   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.006990   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.046795   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.303390   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:18.408194   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.505660   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.506075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.547467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.908947   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.006658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.007120   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.047574   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.407694   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.506237   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.506764   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.546743   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.907775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.006250   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.006926   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.046950   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.407914   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.506444   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.506893   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.547165   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.802891   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:20.908266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.006168   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.006661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.046763   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.407620   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.506280   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.506758   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.547207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.907808   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.006390   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.006832   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.047258   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.407294   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.506192   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.506573   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.546892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.803612   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:22.907631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.006412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.006789   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.047499   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.407703   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.506242   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.506922   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.546531   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.907989   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.006557   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.007064   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.047256   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.407245   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.506027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.506326   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.546265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.907143   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.006149   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.006574   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.046726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.303085   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:25.407800   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.506502   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.506958   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.549041   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.907130   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.005689   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.006094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.047573   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.407949   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.506465   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.506873   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.547130   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.907930   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.006498   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.006899   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.047132   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.303541   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:27.407076   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.505560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.506083   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.547418   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.907322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.006007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.006289   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.046769   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.506106   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.506493   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.547121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.907052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.005692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.006125   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.047636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.407566   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.506440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.506780   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.547158   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.802646   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:29.907185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.005875   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.006320   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.046391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.407344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.505998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.506431   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.546833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.006755   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.007344   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.047565   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.407650   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.506485   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.506906   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.547281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.803334   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:31.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.006411   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.006716   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.047171   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.407108   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.505792   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.506357   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.547493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.907787   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.006393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.007161   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.047511   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.407346   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.506125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.506509   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.547645   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.803187   12642 node_ready.go:49] node "addons-821781" has status "Ready":"True"
	I0916 10:24:33.803213   12642 node_ready.go:38] duration metric: took 39.003174602s for node "addons-821781" to be "Ready" ...
	I0916 10:24:33.803225   12642 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
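(Editor's illustration. The three node_ready lines above mark the point where the node "addons-821781" flips from "Ready":"False" to "Ready":"True" after ~39s, and the wait widens to the system-critical pods. As a minimal sketch of what such a node check amounts to, assuming client-go and a reachable kubeconfig; this is illustrative only, not minikube's actual node_ready.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True,
// which is what the `"Ready":"True"` log lines above reflect.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "addons-821781" is the node name taken from the log above.
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-821781", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q Ready=%v\n", node.Name, nodeReady(node))
}

A real waiter would re-run this check on an interval until the condition turns True or a deadline expires, which is exactly the cadence the timestamps above show.)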
	I0916 10:24:33.970599   12642 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:34.069001   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.088106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.088355   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:34.088380   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.088736   12642 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:34.088757   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
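(Editor's illustration. The kapi.go:86/kapi.go:96 pairs above show the other waiting pattern in this log: list the pods matching a label selector, report how many were found, and keep polling until every one reports Ready. A minimal sketch of that loop, assuming client-go; the namespace, selector, 6m timeout, and ~500ms poll interval are taken from or inferred from the log, and this is not minikube's actual kapi.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForSelector polls pods matching selector in ns until all of them
// report Ready, the way the lines above keep re-checking
// "kubernetes.io/minikube-addons=registry" and friends.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := 0
		for i := range pods.Items {
			if podReady(&pods.Items[i]) {
				ready++
			}
		}
		fmt.Printf("Found %d Pods for label selector %s (%d ready)\n", len(pods.Items), selector, ready)
		if len(pods.Items) > 0 && ready == len(pods.Items) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // deadline exceeded: the addon never came up
		case <-time.After(500 * time.Millisecond): // poll interval; the log above ticks roughly every 500ms
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForSelector(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}

Note that "current state: Pending: [<nil>]" in the log is minikube reporting the pod phase plus a nil container-state detail; the pods here stay Pending for the whole window shown, which is consistent with the addon-related failures listed at the top of this report.)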
	I0916 10:24:34.407852   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.508926   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.509671   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.609806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.907890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.006456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.006807   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.047745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.407857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.476382   12642 pod_ready.go:93] pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.476406   12642 pod_ready.go:82] duration metric: took 1.50577246s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.476429   12642 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480336   12642 pod_ready.go:93] pod "etcd-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.480359   12642 pod_ready.go:82] duration metric: took 3.921757ms for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480374   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484379   12642 pod_ready.go:93] pod "kube-apiserver-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.484399   12642 pod_ready.go:82] duration metric: took 4.01835ms for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484407   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488483   12642 pod_ready.go:93] pod "kube-controller-manager-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.488502   12642 pod_ready.go:82] duration metric: took 4.089026ms for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488513   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492259   12642 pod_ready.go:93] pod "kube-proxy-7grrw" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.492277   12642 pod_ready.go:82] duration metric: took 3.758267ms for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492286   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.508978   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.509276   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.548257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.875363   12642 pod_ready.go:93] pod "kube-scheduler-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.875387   12642 pod_ready.go:82] duration metric: took 383.093988ms for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.875399   12642 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.907718   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.006857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.007094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.047708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.407759   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.506231   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.506532   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.547623   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.908178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.009196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.009613   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.111822   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.408212   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.507815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.508955   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.597930   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.899332   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.907966   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.007593   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.007941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.096688   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.407803   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.507008   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.507185   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.548820   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.912820   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.007788   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.007812   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.048263   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.407800   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.506945   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.507715   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.548866   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.908787   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.007032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.007632   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.048796   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.398719   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:40.407487   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.507397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.507772   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.548227   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.908344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.009557   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.009817   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.048882   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.407443   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.507386   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.507614   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.547783   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.907344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.006438   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.006755   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.047817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.407604   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.506506   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.506862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.548258   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.880576   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:42.907125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.006570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.006955   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.048271   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.407864   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.507257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.507492   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.548688   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.907268   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.006139   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.006358   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.048808   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.408058   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.506983   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.507322   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.548244   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.907777   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.007224   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.007575   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.048360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.381456   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:45.408061   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.507492   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.507642   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.548176   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.907279   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.006236   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.006567   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.047499   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.407829   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.507175   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.507613   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.549215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.908356   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.007293   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.007559   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.098016   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.398953   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:47.408142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.507848   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.508575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.597783   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.907504   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.006545   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.007094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.047872   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.408467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.506796   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.507040   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.548302   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.907911   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.007377   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.007799   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.048150   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.407649   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.506584   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.507145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.548392   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.881772   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:49.907684   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.006877   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.007616   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.048576   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.408384   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.509092   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.509234   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.548191   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.907565   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.008280   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.008548   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.048447   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.407510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.506404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.547570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.900427   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:51.908013   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.008311   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.009178   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.098159   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.407616   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.506895   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.507402   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.548326   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.907362   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.008415   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.009033   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.110477   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.408669   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.508937   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.509320   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.548259   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.907440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.006459   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.006703   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.047766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.381253   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:54.408025   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.506984   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.548500   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.907545   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.007055   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.007267   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.048307   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.407381   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.506329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.506924   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.547861   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.907031   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.007920   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.048290   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.407755   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.508288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.508534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.547447   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.880835   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:56.907604   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.008980   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.009246   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.048404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.408337   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.506591   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.506714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.547844   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.907931   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.007018   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.007364   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.048745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.407890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.506768   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.507350   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.548030   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.883327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:58.908144   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.008937   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.010047   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.048751   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.407088   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.507067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.507939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.597408   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.907493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.006520   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.006934   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.407658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.507503   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.548304   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.908137   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.007637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.007838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.048049   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.381960   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:01.407780   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.506951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.507128   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.549865   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.908484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.009640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.009714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.047344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.407125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.506639   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.547791   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.908024   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.007189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.007861   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.048215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.408697   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.509655   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.509879   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.547998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.881604   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:03.907142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.006400   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.006547   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.047579   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.407594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.509746   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.510002   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.547819   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.907345   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.006657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.006921   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.048328   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.407535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.506637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.506876   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.548360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.881794   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:05.907547   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.006578   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.007101   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.047920   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.408051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.506012   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.506238   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.548610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.006786   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.007057   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.048484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.407806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.506692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.506986   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.548007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.907772   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.006701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.006970   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.047834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.394559   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:08.408017   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.507156   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.507728   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.597758   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.907919   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.007661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.098454   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.408318   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.509364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.510773   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.598483   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.908201   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.008441   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.009850   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.102292   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.398327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:10.408466   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.507500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.507925   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.548323   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.907708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.006815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.008091   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.047722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.407736   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.507196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.507427   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.599680   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.907752   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.007430   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.007699   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.047776   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.407516   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.506452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.506628   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.550195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.880927   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:12.907727   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.007178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.007457   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.407946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.507322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.507501   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.547784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.908011   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.007871   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.008085   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.049162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.407342   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.506366   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.507489   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.597388   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.881914   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:14.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.007276   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.008484   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.097577   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.407927   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.507867   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.508145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.548701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.909823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.012269   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.012490   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.112080   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.407823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.506640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.507038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.547677   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.908338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.006229   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.006500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.047433   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.380841   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:17.408141   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.507281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.507422   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.548306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.908216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.005946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.006253   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.048471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.407630   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.506857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.507586   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.547722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.908142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.007287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.007657   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.048873   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.399218   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:19.408522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.506838   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.506974   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.548754   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.907508   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.006666   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.007738   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.096885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.407683   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.507079   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.507594   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.549277   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.938821   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.007125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.007361   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.049052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.408461   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.506721   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.507045   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.548148   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.881149   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:21.907701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.007091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.007530   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.108828   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.408067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.507251   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.507505   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.549744   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.908512   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.006557   12642 kapi.go:107] duration metric: took 1m24.503572468s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:23.007211   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.050575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.408216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.507222   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.548029   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.881704   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:23.907636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.006951   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.048091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.407560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.506856   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.548705   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.907750   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.006941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.048097   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.408473   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.507086   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.548651   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.907834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.007469   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.415775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.417875   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:26.507746   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.549493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.908404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.009635   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.048391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.408105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.509068   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.548222   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.908042   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.007883   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.047932   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.408370   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.507379   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.548467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.898654   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:28.907039   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.007310   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.048105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.407790   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.507440   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.598195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.907810   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.007961   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.407748   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.548456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.908206   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.007623   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.048306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.380691   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:31.407719   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.506896   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.547878   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.907840   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.007212   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.048133   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.407238   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.506798   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.548528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.907455   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.006747   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.047570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.381514   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:33.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.506478   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.548374   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.907944   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.007347   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.048784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.408200   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.506244   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.548189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.907539   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.006862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.049282   12642 kapi.go:107] duration metric: took 1m35.505619997s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:25:35.407599   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.881121   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:35.907998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.007303   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.407476   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.506940   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.006647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.408081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.507464   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.908184   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.007201   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.381474   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:38.407986   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.508647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.908946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.008435   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.408471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.510473   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.995610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.008869   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.397632   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:40.408032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.509659   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.907933   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.007031   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.408056   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.508041   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.908287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.006885   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.407440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.880849   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:42.907379   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.008348   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.408661   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.907189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.006692   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.407965   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.507074   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.908416   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.006411   12642 kapi.go:107] duration metric: took 1m46.503572843s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:45.381179   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:45.459019   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.907457   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.408510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.907182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.396594   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:47.407631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.908030   12642 kapi.go:107] duration metric: took 1m45.003803312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:47.909696   12642 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-821781 cluster.
	I0916 10:25:47.911374   12642 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:47.913470   12642 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:47.915138   12642 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, helm-tiller, metrics-server, storage-provisioner, cloud-spanner, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:47.916678   12642 addons.go:510] duration metric: took 1m55.100061322s for enable addons: enabled=[ingress-dns nvidia-device-plugin helm-tiller metrics-server storage-provisioner cloud-spanner yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
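
The kapi.go:96 lines above are a polling loop: minikube repeatedly lists each addon's pods by label selector and logs the phase until every pod leaves Pending, then emits the "duration metric" line. A minimal client-go sketch of that pattern, assuming a standard clientset; waitForPodsByLabel and the poll interval are illustrative, not minikube's actual code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPodsByLabel polls until every pod matching selector is Running,
    // mirroring the `waiting for pod "..." current state: Pending` lines above.
    func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		allRunning := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				allRunning = false
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    			}
    		}
    		if allRunning {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }
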
	I0916 10:25:49.881225   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:52.381442   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:54.380287   12642 pod_ready.go:93] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.380308   12642 pod_ready.go:82] duration metric: took 1m18.504902601s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.380318   12642 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384430   12642 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.384450   12642 pod_ready.go:82] duration metric: took 4.126025ms for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384468   12642 pod_ready.go:39] duration metric: took 1m20.581229133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
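
The pod_ready.go:103 lines interleaved with the addon waits check a stricter signal: the pod's Ready condition rather than its Running phase, which is why metrics-server can report Ready:"False" long after it is scheduled. A hedged sketch of that check (isPodReady is an illustrative name):

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the Ready condition is True — the value
    // printed as `has status "Ready":"False"` in the pod_ready.go lines above.
    func isPodReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
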
	I0916 10:25:54.384485   12642 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:25:54.384513   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:54.384564   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:54.417384   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.417411   12642 cri.go:89] found id: ""
	I0916 10:25:54.417421   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:54.417476   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.420785   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:54.420839   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:54.452868   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.452890   12642 cri.go:89] found id: ""
	I0916 10:25:54.452898   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:54.452950   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.456066   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:54.456119   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:54.487907   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:54.487930   12642 cri.go:89] found id: ""
	I0916 10:25:54.487938   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:54.487992   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.491215   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:54.491266   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:54.523745   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.523766   12642 cri.go:89] found id: ""
	I0916 10:25:54.523775   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:54.523831   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.527161   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:54.527229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:54.560095   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.560123   12642 cri.go:89] found id: ""
	I0916 10:25:54.560133   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:54.560180   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.563529   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:54.563589   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:54.596576   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:54.596600   12642 cri.go:89] found id: ""
	I0916 10:25:54.596608   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:54.596655   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.599825   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:54.599906   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:54.632507   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:54.632531   12642 cri.go:89] found id: ""
	I0916 10:25:54.632539   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:54.632620   12642 ssh_runner.go:195] Run: which crictl
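
Each cri.go pair above resolves one container ID per control-plane component by running `sudo crictl ps -a --quiet --name=<component>` over SSH and splitting the output, then locating crictl with `which`. The same step as a plain local invocation (a sketch assuming crictl on PATH and sudo rights; listContainerIDs is an illustrative name):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the IDs crictl prints for containers whose
    // name matches the given filter, e.g. "kube-apiserver".
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listContainerIDs("kube-apiserver")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ids)
    }
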
	I0916 10:25:54.635882   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:54.635906   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:54.698451   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:54.698492   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:54.799766   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:54.799797   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.843933   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:54.843963   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.894142   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:54.894174   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.934257   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:54.934288   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.967135   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:54.967163   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:55.001104   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:55.001133   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:55.013631   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:55.013663   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:55.047469   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:55.047499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:55.106750   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:55.106787   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:55.182277   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:55.182324   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
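
The "Gathering logs for ..." pairs each run one shell pipeline per source: journalctl for the kubelet and CRI-O units, `crictl logs --tail 400 <id>` per container, dmesg filtered to warnings and above, and `kubectl describe nodes` against the in-VM kubeconfig. A condensed sketch of the journald/dmesg part, using local os/exec in place of minikube's ssh_runner (pipelines copied verbatim from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sources maps a label to the exact shell pipeline shown in the log.
    var sources = map[string]string{
    	"kubelet": "sudo journalctl -u kubelet -n 400",
    	"CRI-O":   "sudo journalctl -u crio -n 400",
    	"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    }

    func main() {
    	for name, pipeline := range sources {
    		out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("== %s ==\n%s\n", name, out)
    	}
    }
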
	I0916 10:25:57.726595   12642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:25:57.740119   12642 api_server.go:72] duration metric: took 2m4.923540882s to wait for apiserver process to appear ...
	I0916 10:25:57.740152   12642 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:25:57.740187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:57.740229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:57.772533   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:57.772558   12642 cri.go:89] found id: ""
	I0916 10:25:57.772566   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:57.772615   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.775778   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:57.775838   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:57.813245   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:57.813271   12642 cri.go:89] found id: ""
	I0916 10:25:57.813281   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:57.813354   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.817691   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:57.817769   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:57.851306   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:57.851328   12642 cri.go:89] found id: ""
	I0916 10:25:57.851335   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:57.851378   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.854640   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:57.854706   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:57.904175   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:57.904198   12642 cri.go:89] found id: ""
	I0916 10:25:57.904205   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:57.904252   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.907938   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:57.907996   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:57.941402   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:57.941421   12642 cri.go:89] found id: ""
	I0916 10:25:57.941428   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:57.941481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.944741   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:57.944796   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:57.979020   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:57.979042   12642 cri.go:89] found id: ""
	I0916 10:25:57.979051   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:57.979108   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.982381   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:57.982431   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:58.014858   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:58.014881   12642 cri.go:89] found id: ""
	I0916 10:25:58.014890   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:58.014937   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:58.018251   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:58.018272   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:58.050812   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:58.050847   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:58.108286   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:58.108318   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:58.182964   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:58.183002   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:58.248089   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:58.248126   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:58.260293   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:58.260339   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:58.355509   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:58.355535   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:58.398314   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:58.398350   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:58.445703   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:58.445736   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:25:58.485997   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:58.486025   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:58.519971   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:58.519998   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:58.558470   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:58.558499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.092930   12642 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:26:01.096706   12642 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:26:01.097615   12642 api_server.go:141] control plane version: v1.31.1
	I0916 10:26:01.097635   12642 api_server.go:131] duration metric: took 3.357476241s to wait for apiserver health ...
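
The api_server.go lines above issue an HTTPS GET to the apiserver's /healthz endpoint and expect HTTP 200 with the literal body "ok". A minimal sketch; in a real check the client would present the kubeconfig's client certificate, so the InsecureSkipVerify shortcut here is for illustration only:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Illustrative only: skip TLS verification instead of loading the
    	// minikube client certs a real health check would use.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
    }
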
	I0916 10:26:01.097642   12642 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:26:01.097662   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:26:01.097709   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:26:01.131450   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.131477   12642 cri.go:89] found id: ""
	I0916 10:26:01.131489   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:26:01.131542   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.134752   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:26:01.134813   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:26:01.166978   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.167002   12642 cri.go:89] found id: ""
	I0916 10:26:01.167014   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:26:01.167057   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.170770   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:26:01.170821   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:26:01.203544   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.203564   12642 cri.go:89] found id: ""
	I0916 10:26:01.203571   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:26:01.203632   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.207027   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:26:01.207101   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:26:01.240766   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.240787   12642 cri.go:89] found id: ""
	I0916 10:26:01.240795   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:26:01.240847   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.244187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:26:01.244242   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:26:01.278657   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.278686   12642 cri.go:89] found id: ""
	I0916 10:26:01.278696   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:26:01.278754   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.282264   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:26:01.282333   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:26:01.316408   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.316431   12642 cri.go:89] found id: ""
	I0916 10:26:01.316439   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:26:01.316481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.319848   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:26:01.319913   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:26:01.352617   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.352637   12642 cri.go:89] found id: ""
	I0916 10:26:01.352645   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:26:01.352692   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.356052   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:26:01.356078   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:26:01.430171   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:26:01.430203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:26:01.471970   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:26:01.472001   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.512405   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:26:01.512437   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.545482   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:26:01.545511   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:26:01.657458   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:26:01.657495   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.703167   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:26:01.703203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.753488   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:26:01.753528   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.788778   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:26:01.788809   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.847216   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:26:01.847252   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.883444   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:26:01.883479   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:26:01.950602   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:26:01.950637   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:26:04.473621   12642 system_pods.go:59] 19 kube-system pods found
	I0916 10:26:04.473667   12642 system_pods.go:61] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.473674   12642 system_pods.go:61] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.473678   12642 system_pods.go:61] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.473681   12642 system_pods.go:61] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.473685   12642 system_pods.go:61] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.473688   12642 system_pods.go:61] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.473692   12642 system_pods.go:61] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.473696   12642 system_pods.go:61] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.473699   12642 system_pods.go:61] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.473702   12642 system_pods.go:61] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.473706   12642 system_pods.go:61] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.473709   12642 system_pods.go:61] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.473712   12642 system_pods.go:61] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.473715   12642 system_pods.go:61] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.473718   12642 system_pods.go:61] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.473722   12642 system_pods.go:61] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.473725   12642 system_pods.go:61] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.473728   12642 system_pods.go:61] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.473731   12642 system_pods.go:61] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.473737   12642 system_pods.go:74] duration metric: took 3.376089349s to wait for pod list to return data ...
	I0916 10:26:04.473747   12642 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:26:04.476243   12642 default_sa.go:45] found service account: "default"
	I0916 10:26:04.476265   12642 default_sa.go:55] duration metric: took 2.512507ms for default service account to be created ...
	I0916 10:26:04.476273   12642 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:26:04.484719   12642 system_pods.go:86] 19 kube-system pods found
	I0916 10:26:04.484756   12642 system_pods.go:89] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.484762   12642 system_pods.go:89] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.484766   12642 system_pods.go:89] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.484770   12642 system_pods.go:89] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.484774   12642 system_pods.go:89] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.484778   12642 system_pods.go:89] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.484782   12642 system_pods.go:89] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.484786   12642 system_pods.go:89] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.484790   12642 system_pods.go:89] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.484796   12642 system_pods.go:89] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.484800   12642 system_pods.go:89] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.484803   12642 system_pods.go:89] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.484807   12642 system_pods.go:89] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.484812   12642 system_pods.go:89] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.484818   12642 system_pods.go:89] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.484822   12642 system_pods.go:89] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.484826   12642 system_pods.go:89] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.484830   12642 system_pods.go:89] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.484834   12642 system_pods.go:89] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.484840   12642 system_pods.go:126] duration metric: took 8.563189ms to wait for k8s-apps to be running ...
	I0916 10:26:04.484851   12642 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:26:04.484897   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:26:04.496212   12642 system_svc.go:56] duration metric: took 11.351945ms WaitForService to wait for kubelet
	I0916 10:26:04.496239   12642 kubeadm.go:582] duration metric: took 2m11.67966753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:26:04.496261   12642 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:26:04.499350   12642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:26:04.499377   12642 node_conditions.go:123] node cpu capacity is 8
	I0916 10:26:04.499389   12642 node_conditions.go:105] duration metric: took 3.122952ms to run NodePressure ...
	I0916 10:26:04.499400   12642 start.go:241] waiting for startup goroutines ...
	I0916 10:26:04.499406   12642 start.go:246] waiting for cluster config update ...
	I0916 10:26:04.499455   12642 start.go:255] writing updated cluster config ...
	I0916 10:26:04.519561   12642 ssh_runner.go:195] Run: rm -f paused
	I0916 10:26:04.665202   12642 out.go:177] * Done! kubectl is now configured to use "addons-821781" cluster and "default" namespace by default
	E0916 10:26:04.666644   12642 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
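
Note: the "exec format error" above means the kubectl binary at /usr/local/bin/kubectl could not be executed on this host, which almost always indicates a binary built for a different CPU architecture (or a corrupted download). A minimal sanity check, assuming shell access to the affected host (illustrative; not part of the captured run):

	file /usr/local/bin/kubectl   # prints the binary's target architecture
	uname -m                      # prints the host architecture; the two must match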
	
	
	==> CRI-O <==
	Sep 16 10:26:53 addons-821781 crio[1028]: time="2024-09-16 10:26:53.931541768Z" level=info msg="Ran pod sandbox a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a with infra container: headlamp/headlamp-57fb76fcdb-xfkdj/POD" id=3306c507-1e8e-49de-a958-30a1dce1fccb name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:26:53 addons-821781 crio[1028]: time="2024-09-16 10:26:53.932584044Z" level=info msg="Checking image status: ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971" id=1299fd8e-9c6d-421e-988a-e47f30508fe6 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:53 addons-821781 crio[1028]: time="2024-09-16 10:26:53.932851603Z" level=info msg="Image ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971 not found" id=1299fd8e-9c6d-421e-988a-e47f30508fe6 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:53 addons-821781 crio[1028]: time="2024-09-16 10:26:53.933674743Z" level=info msg="Pulling image: ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971" id=0db83db7-d78d-48a7-94a1-d94c8c846ae9 name=/runtime.v1.ImageService/PullImage
	Sep 16 10:26:53 addons-821781 crio[1028]: time="2024-09-16 10:26:53.938497620Z" level=info msg="Trying to access \"ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971\""
	Sep 16 10:26:54 addons-821781 crio[1028]: time="2024-09-16 10:26:54.386645163Z" level=info msg="Trying to access \"ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971\""
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.763995571Z" level=info msg="Pulled image: ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971" id=0db83db7-d78d-48a7-94a1-d94c8c846ae9 name=/runtime.v1.ImageService/PullImage
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.764566725Z" level=info msg="Checking image status: ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971" id=17fc676b-99b0-4eb5-9220-af226a298e37 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.765719978Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,RepoTags:[],RepoDigests:[ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971 ghcr.io/headlamp-k8s/headlamp@sha256:c8e183672fcb6f4816fdd2e13c520f7a1946297aa70dd1c46f83bf859c8dd5ec],Size_:187495815,Uid:nil,Username:headlamp,Spec:nil,},Info:map[string]string{},}" id=17fc676b-99b0-4eb5-9220-af226a298e37 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.766446169Z" level=info msg="Checking image status: ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971" id=c5de6159-8790-4c29-8058-062f3cd01e72 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.767540236Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,RepoTags:[],RepoDigests:[ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971 ghcr.io/headlamp-k8s/headlamp@sha256:c8e183672fcb6f4816fdd2e13c520f7a1946297aa70dd1c46f83bf859c8dd5ec],Size_:187495815,Uid:nil,Username:headlamp,Spec:nil,},Info:map[string]string{},}" id=c5de6159-8790-4c29-8058-062f3cd01e72 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.768366013Z" level=info msg="Creating container: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=6333ebfa-7f47-4891-81bf-b5e60ab69798 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.768477983Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.819023840Z" level=info msg="Created container 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=6333ebfa-7f47-4891-81bf-b5e60ab69798 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.819618141Z" level=info msg="Starting container: 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557" id=9ece8bd9-e051-4e9c-a08a-174a05cbaebe name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.825780436Z" level=info msg="Started container" PID=8858 containerID=34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557 description=headlamp/headlamp-57fb76fcdb-xfkdj/headlamp id=9ece8bd9-e051-4e9c-a08a-174a05cbaebe name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.044153577Z" level=info msg="Stopping container: 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557 (timeout: 30s)" id=59988acf-cbf5-4ccc-b391-0d71d7d986dc name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:27:04 addons-821781 conmon[8845]: conmon 34675749bf60eae87e1a <ninfo>: container 8858 exited with status 2
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.173583792Z" level=info msg="Stopped container 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=59988acf-cbf5-4ccc-b391-0d71d7d986dc name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.174150719Z" level=info msg="Stopping pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=5920ec82-b971-47e8-ab8f-97f10512b921 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.174391947Z" level=info msg="Got pod network &{Name:headlamp-57fb76fcdb-xfkdj Namespace:headlamp ID:a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a UID:cad0d003-8455-4239-998d-1327610acea6 NetNS:/var/run/netns/55d20309-9c81-477c-9b7b-a9b7cabae71c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.174556187Z" level=info msg="Deleting pod headlamp_headlamp-57fb76fcdb-xfkdj from CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.210730567Z" level=info msg="Stopped pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=5920ec82-b971-47e8-ab8f-97f10512b921 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.932887074Z" level=info msg="Removing container: 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557" id=7c971f4c-d380-4cd4-ad5a-169db70dfa55 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.946676009Z" level=info msg="Removed container 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=7c971f4c-d380-4cd4-ad5a-169db70dfa55 name=/runtime.v1.RuntimeService/RemoveContainer
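
Note: the CRI-O log above traces the full lifecycle of the headlamp container: image pull, CreateContainer, StartContainer, a StopContainer with a 30s grace timeout (the process exited with status 2), sandbox teardown off the kindnet CNI network, and finally RemoveContainer. The same state can be queried directly through crictl; a sketch assuming SSH access to the node (container ID prefixes are accepted):

	sudo crictl ps -a                          # list containers, including exited ones
	sudo crictl inspect 34675749bf60e          # full JSON state for the headlamp container
	sudo crictl logs --tail 50 34675749bf60e   # its most recent log lines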
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	0dbc187486a77       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 About a minute ago   Running             gcp-auth                                 0                   754882dcda596       gcp-auth-89d5ffd79-b6kzx
	3603c45c1e4ab       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             About a minute ago   Running             controller                               0                   31855714f04d8       ingress-nginx-controller-bc57996ff-8jlsc
	b6501ff69088d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	85a5122ba30eb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	33527f5387a55       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	2b3dcba2a09e7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ea5a7e7486ae3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	5247d23b3a397       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   5faba155231dd       snapshot-controller-56fcc65765-tdxm7
	68547a0643ba6       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   4cb61d4296010       csi-hostpath-resizer-0
	a2eec9453e9d3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   205f02ffaeb65       csi-hostpath-attacher-0
	d3033819602e2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ffffb6d23a520       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   About a minute ago   Exited              patch                                    0                   0defdefc8e690       ingress-nginx-admission-patch-22v56
	adcb6aad69051       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   b44ff8bf56a7c       snapshot-controller-56fcc65765-b752p
	d7c74998aab32       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   About a minute ago   Exited              create                                   0                   92efe213e3cc9       ingress-nginx-admission-create-dgb9n
	318be751079db       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             2 minutes ago        Running             local-path-provisioner                   0                   cdfaa5befff59       local-path-provisioner-86d989889c-6xhgj
	960e66cd3823f       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   5f0be722b34e2       tiller-deploy-b48cc5f79-jcsqv
	2a650198714d3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        2 minutes ago        Running             metrics-server                           0                   a92ded8c2c84e       metrics-server-84c5f94fbc-t6sfx
	9db25418c7b36       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             2 minutes ago        Running             minikube-ingress-dns                     0                   0a160d796662b       kube-ingress-dns-minikube
	fd1c0fa2e8742       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             2 minutes ago        Running             storage-provisioner                      0                   578052293e511       storage-provisioner
	5fc078f948938       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             2 minutes ago        Running             coredns                                  0                   dd25c29f2c98b       coredns-7c65d6cfc9-f6b44
	8953bd3ac9bbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             3 minutes ago        Running             kube-proxy                               0                   31612ec902e41       kube-proxy-7grrw
	e3e02e9338f21       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             3 minutes ago        Running             kindnet-cni                              0                   efca226e04346       kindnet-2bwl4
	f7c9dd60c650e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             3 minutes ago        Running             kube-apiserver                           0                   325d1d3961d30       kube-apiserver-addons-821781
	aef3299386ef0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             3 minutes ago        Running             etcd                                     0                   5db6677261478       etcd-addons-821781
	23817b3f6401e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             3 minutes ago        Running             kube-scheduler                           0                   192ccdf49d648       kube-scheduler-addons-821781
	319dfee9ab334       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             3 minutes ago        Running             kube-controller-manager                  0                   471807181e888       kube-controller-manager-addons-821781
	
	
	==> coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] <==
	[INFO] 10.244.0.11:54433 - 5196 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117872s
	[INFO] 10.244.0.11:55203 - 39009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079023s
	[INFO] 10.244.0.11:55203 - 18278 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066179s
	[INFO] 10.244.0.11:53992 - 3361 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005725192s
	[INFO] 10.244.0.11:53992 - 5182 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005902528s
	[INFO] 10.244.0.11:58640 - 39752 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005962306s
	[INFO] 10.244.0.11:58640 - 45636 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007442692s
	[INFO] 10.244.0.11:58081 - 46876 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004814518s
	[INFO] 10.244.0.11:58081 - 7960 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005069952s
	[INFO] 10.244.0.11:56786 - 21825 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084442s
	[INFO] 10.244.0.11:56786 - 8540 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121405s
	[INFO] 10.244.0.21:49162 - 58748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183854s
	[INFO] 10.244.0.21:60540 - 21143 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264439s
	[INFO] 10.244.0.21:57612 - 22108 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123843s
	[INFO] 10.244.0.21:56370 - 29690 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174744s
	[INFO] 10.244.0.21:53939 - 42345 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115165s
	[INFO] 10.244.0.21:54191 - 30184 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102696s
	[INFO] 10.244.0.21:43721 - 49242 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007714914s
	[INFO] 10.244.0.21:58502 - 61297 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008280312s
	[INFO] 10.244.0.21:45585 - 36043 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008154564s
	[INFO] 10.244.0.21:50514 - 10749 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008661461s
	[INFO] 10.244.0.21:41083 - 31758 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006832696s
	[INFO] 10.244.0.21:53762 - 8306 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007439813s
	[INFO] 10.244.0.21:37796 - 13809 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002178233s
	[INFO] 10.244.0.21:36516 - 28559 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002337896s
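
Note: the NXDOMAIN burst above is ordinary resolver search-path expansion, not a DNS failure: each lookup is retried with every search suffix appended before the bare name resolves (the closing NOERROR answers for storage.googleapis.com). For the 10.244.0.21 client (a pod in the gcp-auth namespace), a resolv.conf consistent with the suffixes seen in the log would look roughly like the sketch below; the nameserver address and ndots value are assumptions (typical kubeadm defaults), not values captured in this report:

	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	nameserver 10.96.0.10
	options ndots:5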
	
	
	==> describe nodes <==
	Name:               addons-821781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-821781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-821781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-821781
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-821781"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-821781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:27:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:25:49 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:25:49 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:25:49 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:25:49 +0000   Mon, 16 Sep 2024 10:24:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-821781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a93a1abfd8e74fb89ecb0b25fd80b840
	  System UUID:                c474d608-aa29-4551-b357-d17e9479a01d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-b6kzx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8jlsc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m12s
	  kube-system                 coredns-7c65d6cfc9-f6b44                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m18s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 csi-hostpathplugin-pwtwp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 etcd-addons-821781                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m23s
	  kube-system                 kindnet-2bwl4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m18s
	  kube-system                 kube-apiserver-addons-821781                250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 kube-controller-manager-addons-821781       200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-proxy-7grrw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-scheduler-addons-821781                100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 metrics-server-84c5f94fbc-t6sfx             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m13s
	  kube-system                 snapshot-controller-56fcc65765-b752p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 snapshot-controller-56fcc65765-tdxm7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 tiller-deploy-b48cc5f79-jcsqv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  local-path-storage          local-path-provisioner-86d989889c-6xhgj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 3m16s  kube-proxy       
	  Normal   Starting                 3m23s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m23s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m23s  kubelet          Node addons-821781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m23s  kubelet          Node addons-821781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m23s  kubelet          Node addons-821781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m19s  node-controller  Node addons-821781 event: Registered Node addons-821781 in Controller
	  Normal   NodeReady                2m37s  kubelet          Node addons-821781 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.000714]  #3
	[  +0.002750]  #4
	[  +0.001708] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003513] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002098] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] <==
	{"level":"warn","ts":"2024-09-16T10:24:33.965134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.284694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-09-16T10:24:33.965140Z","caller":"traceutil/trace.go:171","msg":"trace[589393049] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.482158ms","start":"2024-09-16T10:24:33.834652Z","end":"2024-09-16T10:24:33.965134Z","steps":["trace[589393049] 'agreement among raft nodes before linearized reading'  (duration: 130.392783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.112983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs\" ","response":"range_response_count:1 size:560"}
	{"level":"warn","ts":"2024-09-16T10:24:33.965172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.412831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/default\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964790Z","caller":"traceutil/trace.go:171","msg":"trace[1719481168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-resizer; range_end:; response_count:1; response_revision:871; }","duration":"130.308398ms","start":"2024-09-16T10:24:33.834475Z","end":"2024-09-16T10:24:33.964784Z","steps":["trace[1719481168] 'agreement among raft nodes before linearized reading'  (duration: 130.231604ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965031Z","caller":"traceutil/trace.go:171","msg":"trace[1439753586] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:871; }","duration":"130.351105ms","start":"2024-09-16T10:24:33.834675Z","end":"2024-09-16T10:24:33.965026Z","steps":["trace[1439753586] 'agreement among raft nodes before linearized reading'  (duration: 130.285964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.622694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:979"}
	{"level":"info","ts":"2024-09-16T10:24:33.965260Z","caller":"traceutil/trace.go:171","msg":"trace[3301844] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.644948ms","start":"2024-09-16T10:24:33.834605Z","end":"2024-09-16T10:24:33.965250Z","steps":["trace[3301844] 'agreement among raft nodes before linearized reading'  (duration: 130.58562ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.745393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:1 size:878"}
	{"level":"info","ts":"2024-09-16T10:24:33.965091Z","caller":"traceutil/trace.go:171","msg":"trace[630312888] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.242708ms","start":"2024-09-16T10:24:33.834842Z","end":"2024-09-16T10:24:33.965085Z","steps":["trace[630312888] 'agreement among raft nodes before linearized reading'  (duration: 130.2013ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965306Z","caller":"traceutil/trace.go:171","msg":"trace[687212945] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:1; response_revision:871; }","duration":"130.768911ms","start":"2024-09-16T10:24:33.834532Z","end":"2024-09-16T10:24:33.965301Z","steps":["trace[687212945] 'agreement among raft nodes before linearized reading'  (duration: 130.728326ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965159Z","caller":"traceutil/trace.go:171","msg":"trace[1851867066] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:871; }","duration":"130.30942ms","start":"2024-09-16T10:24:33.834844Z","end":"2024-09-16T10:24:33.965154Z","steps":["trace[1851867066] 'agreement among raft nodes before linearized reading'  (duration: 130.267065ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965180Z","caller":"traceutil/trace.go:171","msg":"trace[395277833] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.138451ms","start":"2024-09-16T10:24:33.835036Z","end":"2024-09-16T10:24:33.965175Z","steps":["trace[395277833] 'agreement among raft nodes before linearized reading'  (duration: 130.084008ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.964761Z","caller":"traceutil/trace.go:171","msg":"trace[1846466404] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.050288ms","start":"2024-09-16T10:24:33.834699Z","end":"2024-09-16T10:24:33.964750Z","steps":["trace[1846466404] 'agreement among raft nodes before linearized reading'  (duration: 129.823354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.867331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964791Z","caller":"traceutil/trace.go:171","msg":"trace[1570104672] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"101.79293ms","start":"2024-09-16T10:24:33.862992Z","end":"2024-09-16T10:24:33.964785Z","steps":["trace[1570104672] 'agreement among raft nodes before linearized reading'  (duration: 101.763738ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965421Z","caller":"traceutil/trace.go:171","msg":"trace[1827982125] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:871; }","duration":"130.890995ms","start":"2024-09-16T10:24:33.834525Z","end":"2024-09-16T10:24:33.965416Z","steps":["trace[1827982125] 'agreement among raft nodes before linearized reading'  (duration: 130.852764ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965209Z","caller":"traceutil/trace.go:171","msg":"trace[945447364] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.449227ms","start":"2024-09-16T10:24:33.834754Z","end":"2024-09-16T10:24:33.965203Z","steps":["trace[945447364] 'agreement among raft nodes before linearized reading'  (duration: 130.396497ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.001003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-09-16T10:24:33.965579Z","caller":"traceutil/trace.go:171","msg":"trace[1490541276] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:871; }","duration":"131.063942ms","start":"2024-09-16T10:24:33.834502Z","end":"2024-09-16T10:24:33.965566Z","steps":["trace[1490541276] 'agreement among raft nodes before linearized reading'  (duration: 130.98224ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.964852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.18611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/snapshot-controller\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2024-09-16T10:24:33.965093Z","caller":"traceutil/trace.go:171","msg":"trace[1524858032] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"129.821011ms","start":"2024-09-16T10:24:33.835267Z","end":"2024-09-16T10:24:33.965088Z","steps":["trace[1524858032] 'agreement among raft nodes before linearized reading'  (duration: 129.760392ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965632Z","caller":"traceutil/trace.go:171","msg":"trace[945136232] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/snapshot-controller; range_end:; response_count:1; response_revision:871; }","duration":"129.963575ms","start":"2024-09-16T10:24:33.835661Z","end":"2024-09-16T10:24:33.965624Z","steps":["trace[945136232] 'agreement among raft nodes before linearized reading'  (duration: 129.14136ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:26.413976Z","caller":"traceutil/trace.go:171","msg":"trace[182413184] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"129.574416ms","start":"2024-09-16T10:25:26.284376Z","end":"2024-09-16T10:25:26.413950Z","steps":["trace[182413184] 'process raft request'  (duration: 67.733345ms)","trace[182413184] 'compare'  (duration: 61.701552ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:48.300626Z","caller":"traceutil/trace.go:171","msg":"trace[869038067] transaction","detail":"{read_only:false; response_revision:1265; number_of_response:1; }","duration":"110.748846ms","start":"2024-09-16T10:25:48.189856Z","end":"2024-09-16T10:25:48.300605Z","steps":["trace[869038067] 'process raft request'  (duration: 107.391476ms)"],"step_count":1}
	
	
	==> gcp-auth [0dbc187486a77d691a5db4775360d83cdf6dd7084d4c3bd9123b7e051fd6bd74] <==
	2024/09/16 10:25:47 GCP Auth Webhook started!
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	
	
	==> kernel <==
	 10:27:10 up 9 min,  0 users,  load average: 1.01, 0.74, 0.33
	Linux addons-821781 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] <==
	I0916 10:25:03.298473       1 main.go:299] handling current node
	I0916 10:25:13.302332       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:13.302385       1 main.go:299] handling current node
	I0916 10:25:23.298374       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:23.298404       1 main.go:299] handling current node
	I0916 10:25:33.299058       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:33.299118       1 main.go:299] handling current node
	I0916 10:25:43.305413       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:43.305453       1 main.go:299] handling current node
	I0916 10:25:53.299376       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:53.299407       1 main.go:299] handling current node
	I0916 10:26:03.303024       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:03.303056       1 main.go:299] handling current node
	I0916 10:26:13.305426       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:13.305472       1 main.go:299] handling current node
	I0916 10:26:23.298370       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:23.298453       1 main.go:299] handling current node
	I0916 10:26:33.300653       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:33.300694       1 main.go:299] handling current node
	I0916 10:26:43.298403       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:43.298453       1 main.go:299] handling current node
	I0916 10:26:53.299220       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:53.299254       1 main.go:299] handling current node
	I0916 10:27:03.301422       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:27:03.301456       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] <==
	W0916 10:24:33.565907       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	W0916 10:24:33.565951       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.565953       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	E0916 10:24:33.565979       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:33.599472       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.599513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:58.720213       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 10:24:58.720232       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:58.720259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 10:24:58.720301       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:24:58.721354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 10:24:58.721362       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:25:54.202103       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:25:54.202136       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.74.143:443: connect: connection refused" logger="UnhandledError"
	E0916 10:25:54.202195       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:25:54.215066       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:26:47.647164       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:26:48.662402       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 10:26:53.534738       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.40.159"}
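
Note: the apiserver log above shows the v1beta1.metrics.k8s.io APIService repeatedly failing (503s, then a refused connection to 10.105.74.143:443) until its group version is re-added to the ResourceManager at 10:25:54; the later watcher termination follows the gadget.kinvolk.io group being added and torn down. With a working kubectl (which this run lacks, per the exec format error noted earlier), the aggregated API's availability could be checked like this (illustrative only):

	kubectl get apiservice v1beta1.metrics.k8s.io
	kubectl get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'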
	
	
	==> kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] <==
	I0916 10:26:25.049481       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="8.748µs"
	I0916 10:26:35.156432       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0916 10:26:42.155250       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="5.246µs"
	E0916 10:26:48.663809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:26:49.484100       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:26:49.484140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:26:51.446490       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:26:51.446537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:26:51.922555       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:26:51.922598       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:26:52.320583       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:26:52.320624       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:26:53.601416       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="52.049957ms"
	I0916 10:26:53.606395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="4.851613ms"
	I0916 10:26:53.606494       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="58.578µs"
	I0916 10:26:53.610282       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="43.744µs"
	W0916 10:26:55.044011       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:26:55.044048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:26:57.755257       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0916 10:26:57.926605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="51.47µs"
	I0916 10:26:57.939305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.337707ms"
	I0916 10:26:57.939375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="37.082µs"
	I0916 10:27:04.034685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="8.781µs"
	W0916 10:27:04.365551       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:04.365591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] <==
	I0916 10:23:52.638596       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:52.921753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:52.922374       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:53.313675       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:53.319718       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:53.497957       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:53.508623       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:53.508659       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:53.510794       1 config.go:199] "Starting service config controller"
	I0916 10:23:53.510833       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:53.510868       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:53.510874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:53.511480       1 config.go:328] "Starting node config controller"
	I0916 10:23:53.511491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:53.617474       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:53.617556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:23:53.711794       1 shared_informer.go:320] Caches are synced for node config
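The one warning in this section, that nodePortAddresses is unset, means NodePort services accept connections on every local IP. The message itself names the remedy; the flag form would be (a sketch, not applied in this run):

  $ kube-proxy --nodeport-addresses primary          # restrict to the node's primary IP(s)
  $ kube-proxy --nodeport-addresses 192.168.49.0/24  # or an explicit CIDR list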
	
	
	==> kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] <==
	W0916 10:23:44.897301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0916 10:23:44.897124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:44.898296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:44.897140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:44.898337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:44.898344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:45.722888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:45.722927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.731239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.731280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.734491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:23:45.734527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.741804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.741845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.771121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:45.771158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.886831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.886867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.913242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.913290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:46.023935       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:46.023972       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:23:48.220429       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
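All of the scheduler's "forbidden" list/watch failures above fall within the first seconds after process start, before its RBAC bindings were visible to the apiserver; the final line shows the informers recovering once caches sync at 10:23:48. From a machine with a usable kubectl, impersonation can confirm the permissions settled (a sketch):

  $ kubectl auth can-i list pods --as=system:kube-scheduler
  $ kubectl auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler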
	
	
	==> kubelet <==
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.598994    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="67154ffe-2780-45e7-a660-49554e711676" containerName="patch"
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.599001    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c41e69-8354-4fce-98a3-99b23a9ab570" containerName="registry"
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.599008    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5926c8c-4733-4e81-b884-6c31cfffb072" containerName="create"
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.599015    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9415ac6-16e5-4e32-8d52-7f3dc1c3dc38" containerName="cloud-spanner-emulator"
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.599023    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7feeb78-9d18-4383-bd20-fdc91348b8c6" containerName="create"
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.599030    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="2432b1c2-ccad-4646-9941-b5be3a66cf1b" containerName="gadget"
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.599038    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="2432b1c2-ccad-4646-9941-b5be3a66cf1b" containerName="gadget"
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.734894    1623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2w7k\" (UniqueName: \"kubernetes.io/projected/cad0d003-8455-4239-998d-1327610acea6-kube-api-access-x2w7k\") pod \"headlamp-57fb76fcdb-xfkdj\" (UID: \"cad0d003-8455-4239-998d-1327610acea6\") " pod="headlamp/headlamp-57fb76fcdb-xfkdj"
	Sep 16 10:26:53 addons-821781 kubelet[1623]: I0916 10:26:53.734938    1623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cad0d003-8455-4239-998d-1327610acea6-gcp-creds\") pod \"headlamp-57fb76fcdb-xfkdj\" (UID: \"cad0d003-8455-4239-998d-1327610acea6\") " pod="headlamp/headlamp-57fb76fcdb-xfkdj"
	Sep 16 10:26:57 addons-821781 kubelet[1623]: E0916 10:26:57.227232    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482417227071107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:469506,},InodesUsed:&UInt64Value{Value:188,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:57 addons-821781 kubelet[1623]: E0916 10:26:57.227269    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482417227071107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:469506,},InodesUsed:&UInt64Value{Value:188,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:57 addons-821781 kubelet[1623]: I0916 10:26:57.925371    1623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-57fb76fcdb-xfkdj" podStartSLOduration=1.092461955 podStartE2EDuration="4.925327313s" podCreationTimestamp="2024-09-16 10:26:53 +0000 UTC" firstStartedPulling="2024-09-16 10:26:53.933051174 +0000 UTC m=+186.920335600" lastFinishedPulling="2024-09-16 10:26:57.765916532 +0000 UTC m=+190.753200958" observedRunningTime="2024-09-16 10:26:57.924456307 +0000 UTC m=+190.911740791" watchObservedRunningTime="2024-09-16 10:26:57.925327313 +0000 UTC m=+190.912611757"
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.353069    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2w7k\" (UniqueName: \"kubernetes.io/projected/cad0d003-8455-4239-998d-1327610acea6-kube-api-access-x2w7k\") pod \"cad0d003-8455-4239-998d-1327610acea6\" (UID: \"cad0d003-8455-4239-998d-1327610acea6\") "
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.353125    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cad0d003-8455-4239-998d-1327610acea6-gcp-creds\") pod \"cad0d003-8455-4239-998d-1327610acea6\" (UID: \"cad0d003-8455-4239-998d-1327610acea6\") "
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.353208    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cad0d003-8455-4239-998d-1327610acea6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "cad0d003-8455-4239-998d-1327610acea6" (UID: "cad0d003-8455-4239-998d-1327610acea6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.354914    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad0d003-8455-4239-998d-1327610acea6-kube-api-access-x2w7k" (OuterVolumeSpecName: "kube-api-access-x2w7k") pod "cad0d003-8455-4239-998d-1327610acea6" (UID: "cad0d003-8455-4239-998d-1327610acea6"). InnerVolumeSpecName "kube-api-access-x2w7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.454005    1623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x2w7k\" (UniqueName: \"kubernetes.io/projected/cad0d003-8455-4239-998d-1327610acea6-kube-api-access-x2w7k\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.454041    1623 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cad0d003-8455-4239-998d-1327610acea6-gcp-creds\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.931819    1623 scope.go:117] "RemoveContainer" containerID="34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557"
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.946973    1623 scope.go:117] "RemoveContainer" containerID="34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557"
	Sep 16 10:27:04 addons-821781 kubelet[1623]: E0916 10:27:04.947443    1623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557\": container with ID starting with 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557 not found: ID does not exist" containerID="34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557"
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.947483    1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557"} err="failed to get container status \"34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557\": rpc error: code = NotFound desc = could not find container \"34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557\": container with ID starting with 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557 not found: ID does not exist"
	Sep 16 10:27:05 addons-821781 kubelet[1623]: I0916 10:27:05.108889    1623 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad0d003-8455-4239-998d-1327610acea6" path="/var/lib/kubelet/pods/cad0d003-8455-4239-998d-1327610acea6/volumes"
	Sep 16 10:27:07 addons-821781 kubelet[1623]: E0916 10:27:07.229157    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482427229008349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:07 addons-821781 kubelet[1623]: E0916 10:27:07.229199    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482427229008349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
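The recurring eviction-manager errors above mean the kubelet could not derive dedicated image-filesystem stats from CRI-O's ImageFsInfo response, so the eviction sync is skipped for those cycles; running workloads are unaffected. The same CRI endpoint can be queried directly on the node (a diagnostic sketch, assuming the crictl bundled in the minikube image):

  $ minikube -p addons-821781 ssh "sudo crictl imagefsinfo"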
	
	
	==> storage-provisioner [fd1c0fa2e8742125904216a45b6d84f9b367888422cb6083d3e482fd77452994] <==
	I0916 10:24:34.797513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:34.805288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:34.805397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:34.813404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:34.813588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	I0916 10:24:34.814304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d6ca95d-581a-4537-b803-ac9e02f43ec1", APIVersion:"v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4 became leader
	I0916 10:24:34.914571       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
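The provisioner took its leader lease on the kube-system/k8s.io-minikube-hostpath Endpoints object; client-go records the current holder in an annotation on that object, which can be read back with a working client (a sketch, assuming the endpoints-based lock the log indicates):

  $ kubectl -n kube-system get endpoints k8s.io-minikube-hostpath \
      -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'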
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-821781 -n addons-821781
helpers_test.go:261: (dbg) Run:  kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (435.75µs)
helpers_test.go:263: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
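Note that both kubectl invocations above fail in fork/exec, before any request reaches the cluster: "exec format error" is the kernel refusing to execute the binary, which almost always means /usr/local/bin/kubectl was built for a different architecture (or is a truncated or non-binary download), not that the cluster is unhealthy. A quick host-side check (a sketch):

  $ file /usr/local/bin/kubectl   # should report an executable matching the host
  $ uname -m                      # e.g. x86_64; compare against the output above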
--- FAIL: TestAddons/parallel/Ingress (2.00s)

TestAddons/parallel/MetricsServer (323.56s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 8.46142ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003849622s
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (312.637µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (456.542µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (458.786µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (405.414µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (384.097µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (456.234µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (421.541µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (505.324µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (430.635µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (425.824µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (550.754µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-821781 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-821781 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (488.445µs)
addons_test.go:431: failed checking metric server: fork/exec /usr/local/bin/kubectl: exec format error
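The verdict above is again environmental: the metrics-server pod was healthy within about five seconds, and every `kubectl top pods` attempt died in fork/exec with the same exec format error, so the Metrics API was never actually exercised. With a working client it can be queried directly (a sketch):

  $ kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods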
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-821781
helpers_test.go:235: (dbg) docker inspect addons-821781:

-- stdout --
	[
	    {
	        "Id": "60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9",
	        "Created": "2024-09-16T10:23:34.422231958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:34.564816551Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hosts",
	        "LogPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9-json.log",
	        "Name": "/addons-821781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-821781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-821781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-821781",
	                "Source": "/var/lib/docker/volumes/addons-821781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-821781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-821781",
	                "name.minikube.sigs.k8s.io": "addons-821781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb89cb54fc4711f104a02c8d2ebaaa0dae68769e21054477c7dd719ee876c61d",
	            "SandboxKey": "/var/run/docker/netns/cb89cb54fc47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-821781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "66d8d4a2fe0f9ff012a57288f3992a27df27bc2a73eb33a40ff3adbc0fa270ea",
	                    "EndpointID": "54da588c62c62ca60fdaac7dbe299e76b7fad63e791a3bfc770a096d3640b2fb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-821781",
	                        "60dd933522c2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
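When only a few fields from a dump like the one above matter, `docker inspect` accepts a Go-template format string; for example, container state plus its address on the cluster network (a sketch):

  $ docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-821781").IPAddress}}' addons-821781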
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-821781 -n addons-821781
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-821781 logs -n 25: (1.233918069s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-534059              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-920673              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-291625               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-291625            | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-597115                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44611               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-597115              | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | disable dashboard -p                 | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| start   | -p addons-821781 --wait=true         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:26 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| ip      | addons-821781 ip                     | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons                 | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:31 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:11.785613   12642 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:11.786005   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786020   12642 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:11.786026   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786201   12642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:23:11.786846   12642 out.go:352] Setting JSON to false
	I0916 10:23:11.787652   12642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":332,"bootTime":1726481860,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:11.787744   12642 start.go:139] virtualization: kvm guest
	I0916 10:23:11.789971   12642 out.go:177] * [addons-821781] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:11.791581   12642 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:11.791602   12642 notify.go:220] Checking for updates...
	I0916 10:23:11.793279   12642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:11.794876   12642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:11.796234   12642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:23:11.797605   12642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:11.798881   12642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:11.800381   12642 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:11.822354   12642 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:11.822435   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.875294   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.865218731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.875392   12642 docker.go:318] overlay module found
	I0916 10:23:11.877179   12642 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:11.878539   12642 start.go:297] selected driver: docker
	I0916 10:23:11.878555   12642 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:11.878567   12642 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:11.879376   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.928080   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.918595521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.928248   12642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:11.928460   12642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:11.930314   12642 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:11.931824   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:11.931880   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:11.931896   12642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:11.931970   12642 start.go:340] cluster config:
	{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:11.933478   12642 out.go:177] * Starting "addons-821781" primary control-plane node in "addons-821781" cluster
	I0916 10:23:11.934979   12642 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:23:11.936645   12642 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:11.938033   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:11.938077   12642 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:23:11.938086   12642 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:11.938151   12642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:11.938181   12642 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:11.938195   12642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:23:11.938528   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:11.938559   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json: {Name:mkb2d65543ac9e0f1211fb3bb619eaf59705ab34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:11.954455   12642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:11.954550   12642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:11.954565   12642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:11.954570   12642 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:11.954578   12642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:11.954585   12642 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:24.468174   12642 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:24.468219   12642 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:24.468270   12642 start.go:360] acquireMachinesLock for addons-821781: {Name:mk2b69b21902e1a037d888f1a4c14b20c068c000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:24.468392   12642 start.go:364] duration metric: took 101µs to acquireMachinesLock for "addons-821781"
	I0916 10:23:24.468422   12642 start.go:93] Provisioning new machine with config: &{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:24.468511   12642 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:24.470800   12642 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:24.471033   12642 start.go:159] libmachine.API.Create for "addons-821781" (driver="docker")
	I0916 10:23:24.471057   12642 client.go:168] LocalClient.Create starting
	I0916 10:23:24.471161   12642 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:23:24.563569   12642 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:23:24.843226   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:24.859906   12642 cli_runner.go:211] docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:24.859982   12642 network_create.go:284] running [docker network inspect addons-821781] to gather additional debugging logs...
	I0916 10:23:24.860006   12642 cli_runner.go:164] Run: docker network inspect addons-821781
	W0916 10:23:24.875695   12642 cli_runner.go:211] docker network inspect addons-821781 returned with exit code 1
	I0916 10:23:24.875725   12642 network_create.go:287] error running [docker network inspect addons-821781]: docker network inspect addons-821781: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-821781 not found
	I0916 10:23:24.875736   12642 network_create.go:289] output of [docker network inspect addons-821781]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-821781 not found
	
	** /stderr **
	I0916 10:23:24.875825   12642 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:24.892396   12642 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019c5ea0}
	I0916 10:23:24.892450   12642 network_create.go:124] attempt to create docker network addons-821781 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:24.892494   12642 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-821781 addons-821781
	I0916 10:23:24.956362   12642 network_create.go:108] docker network addons-821781 192.168.49.0/24 created
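	The subnet and gateway picked above can be confirmed after the fact; a minimal check against the live network, assuming it has not yet been deleted:
	  docker network inspect addons-821781 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	This should print 192.168.49.0/24 192.168.49.1, matching the log lines above.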
	I0916 10:23:24.956397   12642 kic.go:121] calculated static IP "192.168.49.2" for the "addons-821781" container
	I0916 10:23:24.956461   12642 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:24.972596   12642 cli_runner.go:164] Run: docker volume create addons-821781 --label name.minikube.sigs.k8s.io=addons-821781 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:24.991422   12642 oci.go:103] Successfully created a docker volume addons-821781
	I0916 10:23:24.991492   12642 cli_runner.go:164] Run: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:29.942508   12642 cli_runner.go:217] Completed: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.950978249s)
	I0916 10:23:29.942530   12642 oci.go:107] Successfully prepared a docker volume addons-821781
	I0916 10:23:29.942541   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:29.942558   12642 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:29.942601   12642 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:34.358289   12642 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.415644078s)
	I0916 10:23:34.358318   12642 kic.go:203] duration metric: took 4.415757339s to extract preloaded images to volume ...
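	To spot-check that the preloaded images actually landed in the volume, the same volume can be mounted into a throwaway container; a sketch, assuming the default containers/storage layout used by CRI-O:
	  docker run --rm -v addons-821781:/var busybox ls /var/lib/containers/storage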
	W0916 10:23:34.358449   12642 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:34.358539   12642 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:34.407126   12642 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-821781 --name addons-821781 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-821781 --network addons-821781 --ip 192.168.49.2 --volume addons-821781:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:34.740907   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Running}}
	I0916 10:23:34.761456   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:34.779743   12642 cli_runner.go:164] Run: docker exec addons-821781 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:34.825817   12642 oci.go:144] the created container "addons-821781" has a running status.
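	Equivalent manual checks for the state observed here, and for the dynamically published host port that backs the SSH connections below (the 32768 seen later varies per run):
	  docker container inspect -f '{{.State.Status}}' addons-821781
	  docker port addons-821781 22/tcp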
	I0916 10:23:34.825843   12642 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa...
	I0916 10:23:35.044132   12642 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:35.071224   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.090107   12642 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:35.090127   12642 kic_runner.go:114] Args: [docker exec --privileged addons-821781 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:35.145473   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.163175   12642 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:35.163257   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.181284   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.181510   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.181525   12642 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:35.376812   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.376844   12642 ubuntu.go:169] provisioning hostname "addons-821781"
	I0916 10:23:35.376907   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.394400   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.394569   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.394582   12642 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-821781 && echo "addons-821781" | sudo tee /etc/hostname
	I0916 10:23:35.535760   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.535841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.554208   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.554394   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.554410   12642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-821781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-821781/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-821781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:35.685491   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:35.685520   12642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:23:35.685538   12642 ubuntu.go:177] setting up certificates
	I0916 10:23:35.685549   12642 provision.go:84] configureAuth start
	I0916 10:23:35.685599   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:35.701932   12642 provision.go:143] copyHostCerts
	I0916 10:23:35.702012   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:23:35.702151   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:23:35.702230   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:23:35.702295   12642 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.addons-821781 san=[127.0.0.1 192.168.49.2 addons-821781 localhost minikube]
	I0916 10:23:35.783034   12642 provision.go:177] copyRemoteCerts
	I0916 10:23:35.783097   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:35.783127   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.800161   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:35.893913   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:23:35.915296   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:23:35.937405   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
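	The SANs requested for the server cert above can be read back from the copied file; a sketch, assuming openssl is available in the node image (the -ext flag needs OpenSSL 1.1.1 or newer):
	  docker exec addons-821781 openssl x509 -noout -ext subjectAltName -in /etc/docker/server.pem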
	I0916 10:23:35.959050   12642 provision.go:87] duration metric: took 273.490922ms to configureAuth
	I0916 10:23:35.959082   12642 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:35.959246   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:35.959337   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.977055   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.977247   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.977264   12642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:23:36.194829   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:23:36.194851   12642 machine.go:96] duration metric: took 1.031655385s to provisionDockerMachine
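	The --insecure-registry option written a few lines above lands in a file that is presumably sourced by the crio systemd unit (hence the restart); it can be read back directly:
	  docker exec addons-821781 cat /etc/sysconfig/crio.minikube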
	I0916 10:23:36.194860   12642 client.go:171] duration metric: took 11.723797841s to LocalClient.Create
	I0916 10:23:36.194875   12642 start.go:167] duration metric: took 11.723845183s to libmachine.API.Create "addons-821781"
	I0916 10:23:36.194883   12642 start.go:293] postStartSetup for "addons-821781" (driver="docker")
	I0916 10:23:36.194895   12642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:36.194953   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:36.194987   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.212136   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.306296   12642 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:36.309608   12642 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:36.309638   12642 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:36.309646   12642 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:36.309652   12642 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:36.309662   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:23:36.309721   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:23:36.309744   12642 start.go:296] duration metric: took 114.855265ms for postStartSetup
	I0916 10:23:36.310017   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.326531   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:36.326849   12642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:36.326901   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.343127   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.434151   12642 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:36.438063   12642 start.go:128] duration metric: took 11.969538805s to createHost
	I0916 10:23:36.438087   12642 start.go:83] releasing machines lock for "addons-821781", held for 11.96968194s
	I0916 10:23:36.438170   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.454099   12642 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:36.454144   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.454204   12642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:36.454276   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.472027   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.473599   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.640610   12642 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:36.644626   12642 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:23:36.780722   12642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:36.785109   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.802933   12642 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:36.803016   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.830084   12642 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:36.830106   12642 start.go:495] detecting cgroup driver to use...
	I0916 10:23:36.830135   12642 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:36.830178   12642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:23:36.843678   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:23:36.854207   12642 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:36.854255   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:36.867323   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:36.880430   12642 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:36.955777   12642 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:37.035979   12642 docker.go:233] disabling docker service ...
	I0916 10:23:37.036049   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:37.052780   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:37.063200   12642 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:37.138165   12642 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:37.215004   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:37.225051   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:37.239114   12642 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:23:37.239176   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.248375   12642 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:23:37.248431   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.257180   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.265957   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.274955   12642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:37.283271   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.291833   12642 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.305478   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.314242   12642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:37.321530   12642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:37.328860   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.397743   12642 ssh_runner.go:195] Run: sudo systemctl restart crio
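	After this restart, the two settings rewritten by the sed commands above can be verified in place:
	  docker exec addons-821781 grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
	Expected, per the log: pause_image = "registry.k8s.io/pause:3.10" and cgroup_manager = "cgroupfs".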
	I0916 10:23:37.494696   12642 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:23:37.494784   12642 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:23:37.498069   12642 start.go:563] Will wait 60s for crictl version
	I0916 10:23:37.498121   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:23:37.501763   12642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:37.533845   12642 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:23:37.533971   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.568210   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.602768   12642 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:23:37.604266   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:37.620164   12642 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:37.623594   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:37.633351   12642 kubeadm.go:883] updating cluster {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:37.633481   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:37.633537   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.691488   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.691513   12642 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:23:37.691557   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.721834   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.721855   12642 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:37.721863   12642 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:23:37.721943   12642 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-821781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
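	The unit fragment above is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the effective merged unit can be inspected on the node with:
	  docker exec addons-821781 systemctl cat kubelet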
	I0916 10:23:37.722004   12642 ssh_runner.go:195] Run: crio config
	I0916 10:23:37.761799   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:37.761826   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:37.761837   12642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:37.761858   12642 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-821781 NodeName:addons-821781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:37.761998   12642 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-821781"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
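	Once this file has been copied to /var/tmp/minikube/kubeadm.yaml (done further below, just before init), it can be sanity-checked offline; a sketch, assuming a kubeadm release recent enough to ship the validate subcommand:
	  docker exec addons-821781 /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml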
	
	I0916 10:23:37.762053   12642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:37.770243   12642 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:37.770305   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:37.778774   12642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:23:37.794482   12642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:37.810783   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0916 10:23:37.827097   12642 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:37.830351   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:37.840395   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.914798   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:37.926573   12642 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781 for IP: 192.168.49.2
	I0916 10:23:37.926602   12642 certs.go:194] generating shared ca certs ...
	I0916 10:23:37.926624   12642 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:37.926767   12642 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:23:38.165524   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt ...
	I0916 10:23:38.165552   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt: {Name:mk958b9d7b4e596cca12a43812b033701a1808ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165715   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key ...
	I0916 10:23:38.165727   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key: {Name:mk218c15b5e68b365653a5a88f283b4fd2a63397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165796   12642 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:23:38.317748   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt ...
	I0916 10:23:38.317782   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt: {Name:mke289e24f4d60c196cc49c14787f9db71cc62b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.317972   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key ...
	I0916 10:23:38.317984   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key: {Name:mk238a3132478eab5de811cbc3626e41ad1154f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.318059   12642 certs.go:256] generating profile certs ...
	I0916 10:23:38.318110   12642 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key
	I0916 10:23:38.318136   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt with IP's: []
	I0916 10:23:38.579861   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt ...
	I0916 10:23:38.579894   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: {Name:mk21e84efd5822ab69a95d39a845706a794c0061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580087   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key ...
	I0916 10:23:38.580102   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key: {Name:mkafbaeecfaf57db916f1469c60f36a7c0603c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580202   12642 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e
	I0916 10:23:38.580226   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:38.661523   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e ...
	I0916 10:23:38.661551   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e: {Name:mk3603fd200d1d0c9c664f1f9e2d3f37d0da819e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661721   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e ...
	I0916 10:23:38.661734   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e: {Name:mk979e39754dc7623208af4e4f8346a3268b5e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661802   12642 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt
	I0916 10:23:38.661872   12642 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key
	I0916 10:23:38.661916   12642 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key
	I0916 10:23:38.661934   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt with IP's: []
	I0916 10:23:38.868848   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt ...
	I0916 10:23:38.868882   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt: {Name:mk60143e6be001872095f4a07cc8800f3883cb9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869061   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key ...
	I0916 10:23:38.869072   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key: {Name:mkfcb902307b78d6d49e6123539922887bdc7bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869254   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:38.869291   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:23:38.869321   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:38.869365   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:23:38.869947   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:38.891875   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:38.913044   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:38.935301   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:38.957638   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:38.978769   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:38.999283   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:39.020509   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:39.041006   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:39.062022   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:39.077689   12642 ssh_runner.go:195] Run: openssl version
	I0916 10:23:39.082828   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:39.091794   12642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094851   12642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094909   12642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.101357   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
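	The b5213941.0 link name created here is the OpenSSL subject-hash form used for CA lookup; it can be reproduced from the cert itself:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	which prints b5213941, matching the symlink target set up in the command above.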
	I0916 10:23:39.110237   12642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:39.113275   12642 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:39.113343   12642 kubeadm.go:392] StartCluster: {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:39.113424   12642 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:39.113461   12642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:39.147213   12642 cri.go:89] found id: ""
	I0916 10:23:39.147277   12642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:39.155102   12642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:39.162655   12642 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:39.162713   12642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:39.170269   12642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:39.170287   12642 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:39.170331   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:39.177944   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:39.178006   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:39.185617   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:39.193448   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:39.193494   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:39.201778   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.209504   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:39.209560   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.217167   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:39.224794   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:39.224851   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
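The eight commands above are minikube's stale-config sweep: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and deletes the file when the check fails (here the files do not exist yet, so every grep exits with status 2 and the rm calls are no-ops). A minimal sketch of the same sweep, using only the endpoint and file names taken from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the endpoint is absent or the file is missing
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing: let kubeadm regenerate it
      fi
    done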
	I0916 10:23:39.232091   12642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:39.267943   12642 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:39.268041   12642 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:39.285854   12642 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:39.285924   12642 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:39.285968   12642 kubeadm.go:310] OS: Linux
	I0916 10:23:39.286011   12642 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:39.286080   12642 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:39.286143   12642 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:39.286205   12642 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:39.286307   12642 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:39.286389   12642 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:39.286430   12642 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:39.286498   12642 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:39.286566   12642 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:39.334020   12642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:39.334137   12642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:39.334277   12642 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
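As the preflight note suggests, the image pull can be performed ahead of time; a sketch pinned to the version this run initializes:

    kubeadm config images pull --kubernetes-version v1.31.1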
	I0916 10:23:39.339811   12642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:39.342965   12642 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:39.343081   12642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:39.343174   12642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:39.501471   12642 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:39.656891   12642 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:39.803369   12642 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:39.956554   12642 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:40.122217   12642 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:40.122346   12642 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.178788   12642 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:40.178946   12642 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.253274   12642 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:40.444072   12642 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:40.539814   12642 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:40.539908   12642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:40.740107   12642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:40.805609   12642 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:41.114974   12642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:41.183175   12642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:41.287722   12642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:41.288131   12642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:41.290675   12642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:41.293432   12642 out.go:235]   - Booting up control plane ...
	I0916 10:23:41.293554   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:41.293636   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:41.293726   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:41.302536   12642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:41.307914   12642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:41.307975   12642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:41.387469   12642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:41.387659   12642 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:41.889098   12642 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.704632ms
	I0916 10:23:41.889216   12642 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:46.391264   12642 kubeadm.go:310] [api-check] The API server is healthy after 4.502175176s
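The two health checks above poll plain HTTP(S) endpoints and can be reproduced by hand. The kubelet URL is the one named in the log; the API server's /healthz is, under default RBAC, served to anonymous clients on the secure port (an assumption about this cluster's flags):

    curl -sf http://127.0.0.1:10248/healthz      # kubelet
    curl -skf https://192.168.49.2:8443/healthz  # kube-apiserver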
	I0916 10:23:46.402989   12642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:46.412298   12642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:46.429664   12642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:46.429953   12642 kubeadm.go:310] [mark-control-plane] Marking the node addons-821781 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:46.439045   12642 kubeadm.go:310] [bootstrap-token] Using token: 08e8kf.82j5psgo1mt86ygt
	I0916 10:23:46.440988   12642 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:46.441118   12642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:46.443591   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:46.448741   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:46.451033   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:46.453482   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:46.457052   12642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:46.798062   12642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:47.220263   12642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:47.797780   12642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:47.798623   12642 kubeadm.go:310] 
	I0916 10:23:47.798710   12642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:47.798722   12642 kubeadm.go:310] 
	I0916 10:23:47.798838   12642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:47.798858   12642 kubeadm.go:310] 
	I0916 10:23:47.798897   12642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:47.798955   12642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:47.799030   12642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:47.799050   12642 kubeadm.go:310] 
	I0916 10:23:47.799117   12642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:47.799125   12642 kubeadm.go:310] 
	I0916 10:23:47.799191   12642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:47.799202   12642 kubeadm.go:310] 
	I0916 10:23:47.799273   12642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:47.799371   12642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:47.799433   12642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:47.799458   12642 kubeadm.go:310] 
	I0916 10:23:47.799618   12642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:47.799702   12642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:47.799727   12642 kubeadm.go:310] 
	I0916 10:23:47.799855   12642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800005   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:23:47.800028   12642 kubeadm.go:310] 	--control-plane 
	I0916 10:23:47.800034   12642 kubeadm.go:310] 
	I0916 10:23:47.800137   12642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:47.800147   12642 kubeadm.go:310] 
	I0916 10:23:47.800244   12642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800384   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:23:47.802505   12642 kubeadm.go:310] W0916 10:23:39.265300    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.802965   12642 kubeadm.go:310] W0916 10:23:39.265967    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.803297   12642 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:47.803488   12642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
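Both v1beta3 deprecation warnings point at the same remediation, which kubeadm prints itself. Applied to the config file this run uses (the output path is illustrative):

    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml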
	I0916 10:23:47.803508   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:47.803517   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:47.805594   12642 out.go:177] * Configuring CNI (Container Networking Interface) ...
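kindnet is what minikube recommends for the docker driver paired with a non-docker runtime, as the cni.go line above shows; the equivalent explicit invocation would be:

    minikube start --driver=docker --container-runtime=crio --cni=kindnet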
	I0916 10:23:47.806930   12642 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:47.811723   12642 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:47.811744   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:47.829314   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:23:48.045373   12642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:48.045433   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.045434   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-821781 minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-821781 minikube.k8s.io/primary=true
	I0916 10:23:48.053143   12642 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:48.121750   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.622580   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.121829   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.622144   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.122640   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.622473   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.122549   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.622693   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.122279   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.622129   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.815735   12642 kubeadm.go:1113] duration metric: took 4.770357411s to wait for elevateKubeSystemPrivileges
	I0916 10:23:52.815769   12642 kubeadm.go:394] duration metric: took 13.702442151s to StartCluster
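The burst of `kubectl get sa default` calls above is a poll loop: after creating the minikube-rbac clusterrolebinding, minikube waits for kube-controller-manager to create the default service account before declaring privileges elevated. The same wait, sketched in shell with the paths from the log (the interval mirrors the ~500ms spacing of the logged attempts):

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the service account exists
    done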
	I0916 10:23:52.815790   12642 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.815914   12642 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:52.816324   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.816539   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:52.816545   12642 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:52.816616   12642 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:52.816735   12642 addons.go:69] Setting yakd=true in profile "addons-821781"
	I0916 10:23:52.816749   12642 addons.go:69] Setting ingress-dns=true in profile "addons-821781"
	I0916 10:23:52.816756   12642 addons.go:69] Setting default-storageclass=true in profile "addons-821781"
	I0916 10:23:52.816766   12642 addons.go:69] Setting inspektor-gadget=true in profile "addons-821781"
	I0916 10:23:52.816771   12642 addons.go:234] Setting addon ingress-dns=true in "addons-821781"
	I0916 10:23:52.816777   12642 addons.go:234] Setting addon inspektor-gadget=true in "addons-821781"
	I0916 10:23:52.816781   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.816788   12642 addons.go:69] Setting cloud-spanner=true in profile "addons-821781"
	I0916 10:23:52.816798   12642 addons.go:234] Setting addon cloud-spanner=true in "addons-821781"
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816821   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816815   12642 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-821781"
	I0916 10:23:52.816831   12642 addons.go:69] Setting volumesnapshots=true in profile "addons-821781"
	I0916 10:23:52.816846   12642 addons.go:234] Setting addon volumesnapshots=true in "addons-821781"
	I0916 10:23:52.816852   12642 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-821781"
	I0916 10:23:52.816859   12642 addons.go:69] Setting gcp-auth=true in profile "addons-821781"
	I0916 10:23:52.816864   12642 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-821781"
	I0916 10:23:52.816869   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816875   12642 mustload.go:65] Loading cluster: addons-821781
	I0916 10:23:52.816879   12642 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:52.816885   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816897   12642 addons.go:69] Setting ingress=true in profile "addons-821781"
	I0916 10:23:52.816908   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816914   12642 addons.go:234] Setting addon ingress=true in "addons-821781"
	I0916 10:23:52.816821   12642 addons.go:69] Setting storage-provisioner=true in profile "addons-821781"
	I0916 10:23:52.816951   12642 addons.go:234] Setting addon storage-provisioner=true in "addons-821781"
	I0916 10:23:52.816952   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816967   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816991   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.817237   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817375   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816847   12642 addons.go:69] Setting helm-tiller=true in profile "addons-821781"
	I0916 10:23:52.817387   12642 addons.go:69] Setting registry=true in profile "addons-821781"
	I0916 10:23:52.817393   12642 addons.go:234] Setting addon helm-tiller=true in "addons-821781"
	I0916 10:23:52.817398   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817399   12642 addons.go:234] Setting addon registry=true in "addons-821781"
	I0916 10:23:52.817413   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817421   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817453   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817460   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817835   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817839   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.818548   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816758   12642 addons.go:234] Setting addon yakd=true in "addons-821781"
	I0916 10:23:52.818812   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816831   12642 addons.go:69] Setting metrics-server=true in profile "addons-821781"
	I0916 10:23:52.819624   12642 addons.go:234] Setting addon metrics-server=true in "addons-821781"
	I0916 10:23:52.819661   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816777   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-821781"
	I0916 10:23:52.820048   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820121   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820925   12642 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:52.817377   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.823819   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:52.819369   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817378   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816830   12642 addons.go:69] Setting volcano=true in profile "addons-821781"
	I0916 10:23:52.827260   12642 addons.go:234] Setting addon volcano=true in "addons-821781"
	I0916 10:23:52.827341   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.827903   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816822   12642 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-821781"
	I0916 10:23:52.828667   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-821781"
	I0916 10:23:52.846468   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.849708   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.849779   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.858180   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:52.860117   12642 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:52.861491   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:52.861515   12642 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:52.861580   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.861792   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:52.863536   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:52.865265   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:52.868592   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:52.871812   12642 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:52.873467   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:52.873491   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:52.873553   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.873826   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:52.875500   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:52.876891   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:52.878274   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:52.878295   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:52.878358   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.885380   12642 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:52.887180   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:52.887200   12642 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:52.887253   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.887590   12642 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:52.889278   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:52.889293   12642 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:52.891126   12642 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:52.891146   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:52.891207   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.891375   12642 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:52.893052   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.893213   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:52.893225   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:52.893284   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.895906   12642 addons.go:234] Setting addon default-storageclass=true in "addons-821781"
	I0916 10:23:52.895950   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.896395   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.902602   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.904755   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:52.904779   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:52.904841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.913208   12642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:52.916490   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:52.916516   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:52.916578   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.920102   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.921373   12642 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:52.924287   12642 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:52.924310   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:52.924367   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.924567   12642 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:52.924966   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.927248   12642 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:52.927271   12642 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:52.927324   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	W0916 10:23:52.939182   12642 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
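The volcano failure above is an expected runtime incompatibility rather than a regression; if the addon should stay off for crio profiles it can be disabled explicitly (profile name taken from the log):

    minikube -p addons-821781 addons disable volcano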
	I0916 10:23:52.945562   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.947311   12642 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:52.949640   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:52.949813   12642 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:52.949828   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:52.949883   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.950915   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:52.950951   12642 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:52.951010   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.967061   12642 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-821781"
	I0916 10:23:52.967112   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.967600   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.976558   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.977128   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979407   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979587   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979666   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982295   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982301   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984209   12642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:52.984228   12642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:52.984267   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984282   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.985867   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.992433   12642 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:52.996036   12642 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:52.998876   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:52.998899   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:52.998966   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:53.007398   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.031542   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.198285   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:53.222232   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:53.223607   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:53.303303   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:53.303391   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:53.412003   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:53.494460   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:53.495317   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:53.495388   12642 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:53.500279   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:53.500366   12642 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:53.518431   12642 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:53.518460   12642 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:53.595357   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:53.595389   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:53.595502   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:53.595520   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:53.601235   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:53.601265   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:53.603514   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:53.610819   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:53.613851   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:53.696891   12642 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:53.696920   12642 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:53.697186   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:53.711949   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:53.711981   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:53.793955   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:53.794047   12642 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:53.795627   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:53.795652   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:53.810579   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:53.810623   12642 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:53.818121   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:53.818143   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:54.008884   12642 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:54.008915   12642 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:54.097416   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:54.097502   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:54.105048   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:54.114541   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:54.116113   12642 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.116175   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:54.194093   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:54.194181   12642 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:54.310015   12642 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:54.310107   12642 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:54.315950   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:54.316029   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:54.409828   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.595664   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:54.595750   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:54.795049   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:54.795131   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:54.795986   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:54.796042   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:54.798857   12642 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.60047423s)
	I0916 10:23:54.798970   12642 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
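The 1.6s sed pipeline that just completed rewrites the coredns ConfigMap in place. Reconstructed from the two sed expressions embedded in the command, the Corefile fragment it injects is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

plus a `log` directive inserted before the `errors` line.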
	I0916 10:23:54.798946   12642 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.576635993s)
	I0916 10:23:54.799977   12642 node_ready.go:35] waiting up to 6m0s for node "addons-821781" to be "Ready" ...
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:54.816489   12642 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:54.816544   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:55.096307   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:55.096398   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:55.098163   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:55.303720   12642 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:55.303802   12642 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:55.310866   12642 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:55.310939   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:55.509740   12642 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-821781" context rescaled to 1 replicas
	I0916 10:23:55.603909   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:55.603992   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:55.609116   12642 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:55.609197   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:55.701381   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:56.095470   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:56.095499   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:56.106357   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:56.115945   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.892303376s)
	I0916 10:23:56.209795   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:56.209873   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:56.410426   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:56.410515   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:56.511332   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.511408   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:56.813818   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.895029   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:58.497986   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.085861545s)
	I0916 10:23:58.498185   12642 addons.go:475] Verifying addon ingress=true in "addons-821781"
	I0916 10:23:58.498214   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.894594589s)
	I0916 10:23:58.498365   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.801136889s)
	I0916 10:23:58.498429   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.393306067s)
	I0916 10:23:58.498499   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.383877389s)
	I0916 10:23:58.498516   12642 addons.go:475] Verifying addon metrics-server=true in "addons-821781"
	I0916 10:23:58.498551   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.08869279s)
	I0916 10:23:58.498561   12642 addons.go:475] Verifying addon registry=true in "addons-821781"
	I0916 10:23:58.498687   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.40044143s)
	I0916 10:23:58.498148   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.003579441s)
	I0916 10:23:58.498265   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.887343223s)
	I0916 10:23:58.498721   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.884394452s)
	I0916 10:23:58.500166   12642 out.go:177] * Verifying registry addon...
	I0916 10:23:58.500186   12642 out.go:177] * Verifying ingress addon...
	I0916 10:23:58.500168   12642 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-821781 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:58.502840   12642 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:23:58.502984   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0916 10:23:58.505976   12642 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0916 10:23:58.508066   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:58.508081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:58.508299   12642 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:23:58.508315   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
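The 'storage-provisioner-rancher' warning a few lines up is a routine optimistic-concurrency conflict: two writers raced on the local-path StorageClass, so the default-class update has to be retried against the latest object version. Re-running the annotation patch is enough; a sketch in the canonical form, with the class name taken from the error:

    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'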
	I0916 10:23:59.012329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.110843   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.299182   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.597694462s)
	W0916 10:23:59.299228   12642 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:59.299250   12642 retry.go:31] will retry after 144.288551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
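Both failures above are the same race: the batch apply submits a VolumeSnapshotClass in the same invocation that creates its CRD, and the API server has not yet registered the snapshot.storage.k8s.io/v1 mapping when the custom resource arrives, hence "ensure CRDs are installed first". minikube handles it with the 144ms retry logged above; done by hand, the fix is to apply the CRDs first, wait for them to reach the Established condition, and only then apply resources of the new kind, roughly:

    $ kubectl apply \
        -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
        -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
        -f snapshot.storage.k8s.io_volumesnapshots.yaml
    $ kubectl wait --for=condition=Established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io \
        crd/volumesnapshotcontents.snapshot.storage.k8s.io \
        crd/volumesnapshots.snapshot.storage.k8s.io
    $ kubectl apply -f csi-hostpath-snapshotclass.yaml   # v1 mapping now resolves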
	I0916 10:23:59.299277   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.19282086s)
	I0916 10:23:59.305158   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:59.444238   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.506924   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.507806   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.539307   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.725399907s)
	I0916 10:23:59.539335   12642 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:59.541718   12642 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:59.543660   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:59.597366   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:59.597452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.006951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.007539   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.096393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.099134   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:00.099205   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.125424   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:24:00.418412   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:00.508361   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.509838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.518754   12642 addons.go:234] Setting addon gcp-auth=true in "addons-821781"
	I0916 10:24:00.518809   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:24:00.519365   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:24:00.536851   12642 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:00.536902   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.553493   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
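The scp/cat round-trips above are the gcp-auth bootstrap: minikube copies the host's application-default credentials and project ID onto the node before switching the addon on. The user-facing equivalent is just two commands (profile name taken from the log; credential discovery happens on the host):

    $ gcloud auth application-default login
    $ minikube -p addons-821781 addons enable gcp-auth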
	I0916 10:24:00.596428   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.006170   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.006803   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.047121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.506287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.506534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.547185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.805560   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:02.007448   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.008038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.046600   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.202834   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.758545356s)
	I0916 10:24:02.202854   12642 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.665973141s)
	I0916 10:24:02.205053   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:02.206664   12642 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:02.208283   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:02.208296   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:02.226305   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:02.226333   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:02.244167   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.244187   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:02.298853   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
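The three manifests applied here (namespace, Service, webhook) deploy gcp-auth-webhook, a mutating admission webhook that injects the copied credentials into pods as they are created. If pods are not picking up credentials, two quick checks, assuming the stock addon layout:

    $ kubectl get mutatingwebhookconfigurations | grep gcp-auth
    $ kubectl -n gcp-auth get pods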
	I0916 10:24:02.506489   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.506968   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.547297   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.899621   12642 addons.go:475] Verifying addon gcp-auth=true in "addons-821781"
	I0916 10:24:02.901591   12642 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:02.904224   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:02.907029   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:02.907051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.007207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.007880   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.047134   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.407111   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.506509   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.507075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.547522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.907027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.007265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.007643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.046594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.303245   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:04.407879   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.506365   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.506939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.547412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.907817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.006397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.007232   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.047038   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.407918   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.506892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.507154   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.547266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.907671   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.006358   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.006625   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.046717   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.407766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.506364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.506750   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.547000   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.803631   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:06.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.006037   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.006551   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.046971   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.407314   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.506338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.547256   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.907021   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.005785   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.006334   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.046439   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.408357   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.505952   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.506643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.547247   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.803661   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:08.907343   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.006189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.006703   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.046966   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.407657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.506182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.506608   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.546942   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.907283   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.005977   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.006337   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.046685   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.408104   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.507241   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.547393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.907115   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.005778   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.006115   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.047296   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.302797   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:11.407398   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.506075   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.506794   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.546885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.907330   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.006053   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.006567   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.046997   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.407912   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.506528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.507006   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.547228   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.907413   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.006062   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.006437   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.046726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.303472   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:13.407845   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.506423   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.506765   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.547162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.907106   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.005737   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.006410   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.047326   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.407189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.505915   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.506316   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.547399   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.907535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.006393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.007080   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.046972   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.407693   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.506219   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.506709   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.547052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.803455   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:15.907823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.006647   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.007106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.047456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.407960   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.506331   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.506765   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.547157   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.907551   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.006299   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.006617   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.047040   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.406899   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.506449   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.506938   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.547210   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.907861   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.006488   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.006990   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.046795   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.303390   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:18.408194   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.505660   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.506075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.547467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.908947   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.006658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.007120   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.047574   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.407694   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.506237   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.506764   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.546743   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.907775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.006250   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.006926   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.046950   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.407914   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.506444   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.506893   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.547165   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.802891   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:20.908266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.006168   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.006661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.046763   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.407620   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.506280   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.506758   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.547207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.907808   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.006390   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.006832   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.047258   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.407294   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.506192   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.506573   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.546892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.803612   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:22.907631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.006412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.006789   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.047499   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.407703   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.506242   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.506922   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.546531   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.907989   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.006557   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.007064   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.047256   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.407245   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.506027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.506326   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.546265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.907143   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.006149   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.006574   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.046726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.303085   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:25.407800   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.506502   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.506958   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.549041   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.907130   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.005689   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.006094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.047573   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.407949   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.506465   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.506873   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.547130   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.907930   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.006498   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.006899   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.047132   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.303541   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:27.407076   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.505560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.506083   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.547418   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.907322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.006007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.006289   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.046769   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.506106   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.506493   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.547121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.907052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.005692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.006125   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.047636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.407566   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.506440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.506780   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.547158   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.802646   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:29.907185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.005875   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.006320   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.046391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.407344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.505998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.506431   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.546833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.006755   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.007344   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.047565   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.407650   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.506485   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.506906   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.547281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.803334   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:31.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.006411   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.006716   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.047171   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.407108   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.505792   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.506357   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.547493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.907787   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.006393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.007161   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.047511   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.407346   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.506125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.506509   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.547645   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.803187   12642 node_ready.go:49] node "addons-821781" has status "Ready":"True"
	I0916 10:24:33.803213   12642 node_ready.go:38] duration metric: took 39.003174602s for node "addons-821781" to be "Ready" ...
	I0916 10:24:33.803225   12642 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
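The node took about 39s to post Ready (typically CNI and kubelet startup), after which the run grants up to six more minutes for the system-critical pods listed above. The same two gates by hand, with the node name and one of the label selectors from the log:

    $ kubectl wait --for=condition=Ready node/addons-821781 --timeout=5m
    $ kubectl -n kube-system wait pod --for=condition=Ready \
        -l k8s-app=kube-dns --timeout=6m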
	I0916 10:24:33.970599   12642 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:34.069001   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.088106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.088355   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:34.088380   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.088736   12642 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:34.088757   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.407852   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.508926   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.509671   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.609806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.907890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.006456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.006807   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.047745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.407857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.476382   12642 pod_ready.go:93] pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.476406   12642 pod_ready.go:82] duration metric: took 1.50577246s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.476429   12642 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480336   12642 pod_ready.go:93] pod "etcd-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.480359   12642 pod_ready.go:82] duration metric: took 3.921757ms for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480374   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484379   12642 pod_ready.go:93] pod "kube-apiserver-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.484399   12642 pod_ready.go:82] duration metric: took 4.01835ms for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484407   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488483   12642 pod_ready.go:93] pod "kube-controller-manager-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.488502   12642 pod_ready.go:82] duration metric: took 4.089026ms for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488513   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492259   12642 pod_ready.go:93] pod "kube-proxy-7grrw" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.492277   12642 pod_ready.go:82] duration metric: took 3.758267ms for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492286   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.508978   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.509276   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.548257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.875363   12642 pod_ready.go:93] pod "kube-scheduler-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.875387   12642 pod_ready.go:82] duration metric: took 383.093988ms for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.875399   12642 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.907718   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.006857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.007094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.047708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.407759   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.506231   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.506532   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.547623   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.908178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.009196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.009613   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.111822   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.408212   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.507815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.508955   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.597930   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.899332   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
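metrics-server is the first pod in this run to sit at "Ready":"False" across consecutive checks; when that persists, the usual diagnostics are the pod events and the container log (names taken from the lines above):

    $ kubectl -n kube-system describe pod metrics-server-84c5f94fbc-t6sfx
    $ kubectl -n kube-system logs deploy/metrics-server --tail=50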
	I0916 10:24:37.907966   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.007593   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.007941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.096688   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.407803   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.507008   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.507185   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.548820   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.912820   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.007788   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.007812   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.048263   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.407800   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.506945   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.507715   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.548866   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.908787   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.007032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.007632   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.048796   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.398719   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:40.407487   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.507397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.507772   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.548227   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.908344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.009557   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.009817   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.048882   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.407443   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.507386   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.507614   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.547783   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.907344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.006438   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.006755   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.047817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.407604   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.506506   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.506862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.548258   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.880576   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:42.907125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.006570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.006955   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.048271   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.407864   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.507257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.507492   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.548688   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.907268   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.006139   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.006358   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.048808   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.408058   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.506983   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.507322   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.548244   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.907777   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.007224   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.007575   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.048360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.381456   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:45.408061   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.507492   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.507642   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.548176   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.907279   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.006236   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.006567   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.047499   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.407829   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.507175   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.507613   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.549215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.908356   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.007293   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.007559   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.098016   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.398953   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:47.408142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.507848   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.508575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.597783   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.907504   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.006545   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.007094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.047872   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.408467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.506796   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.507040   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.548302   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.907911   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.007377   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.007799   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.048150   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.407649   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.506584   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.507145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.548392   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.881772   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:49.907684   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.006877   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.007616   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.048576   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.408384   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.509092   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.509234   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.548191   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.907565   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.008280   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.008548   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.048447   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.407510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.506404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.547570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.900427   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:51.908013   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.008311   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.009178   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.098159   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.407616   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.506895   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.507402   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.548326   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.907362   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.008415   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.009033   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.110477   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.408669   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.508937   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.509320   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.548259   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.907440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.006459   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.006703   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.047766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.381253   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:54.408025   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.506984   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.548500   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.907545   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.007055   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.007267   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.048307   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.407381   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.506329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.506924   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.547861   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.907031   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.007920   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.048290   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.407755   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.508288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.508534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.547447   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.880835   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:56.907604   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.008980   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.009246   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.048404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.408337   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.506591   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.506714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.547844   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.907931   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.007018   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.007364   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.048745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.407890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.506768   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.507350   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.548030   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.883327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:58.908144   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.008937   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.010047   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.048751   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.407088   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.507067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.507939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.597408   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.907493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.006520   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.006934   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.407658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.507503   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.548304   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.908137   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.007637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.007838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.048049   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.381960   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:01.407780   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.506951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.507128   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.549865   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.908484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.009640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.009714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.047344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.407125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.506639   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.547791   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.908024   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.007189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.007861   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.048215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.408697   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.509655   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.509879   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.547998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.881604   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:03.907142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.006400   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.006547   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.047579   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.407594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.509746   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.510002   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.547819   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.907345   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.006657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.006921   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.048328   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.407535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.506637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.506876   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.548360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.881794   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:05.907547   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.006578   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.007101   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.047920   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.408051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.506012   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.506238   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.548610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.006786   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.007057   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.048484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.407806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.506692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.506986   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.548007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.907772   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.006701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.006970   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.047834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.394559   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:08.408017   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.507156   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.507728   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.597758   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.907919   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.007661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.098454   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.408318   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.509364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.510773   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.598483   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.908201   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.008441   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.009850   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.102292   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.398327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:10.408466   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.507500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.507925   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.548323   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.907708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.006815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.008091   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.047722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.407736   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.507196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.507427   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.599680   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.907752   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.007430   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.007699   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.047776   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.407516   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.506452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.506628   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.550195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.880927   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:12.907727   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.007178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.007457   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.407946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.507322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.507501   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.547784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.908011   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.007871   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.008085   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.049162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.407342   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.506366   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.507489   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.597388   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.881914   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:14.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.007276   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.008484   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.097577   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.407927   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.507867   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.508145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.548701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.909823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.012269   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.012490   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.112080   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.407823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.506640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.507038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.547677   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.908338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.006229   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.006500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.047433   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.380841   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:17.408141   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.507281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.507422   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.548306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.908216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.005946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.006253   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.048471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.407630   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.506857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.507586   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.547722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.908142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.007287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.007657   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.048873   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.399218   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:19.408522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.506838   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.506974   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.548754   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.907508   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.006666   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.007738   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.096885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.407683   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.507079   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.507594   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.549277   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.938821   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.007125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.007361   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.049052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.408461   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.506721   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.507045   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.548148   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.881149   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:21.907701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.007091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.007530   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.108828   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.408067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.507251   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.507505   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.549744   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.908512   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.006557   12642 kapi.go:107] duration metric: took 1m24.503572468s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:23.007211   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.050575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.408216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.507222   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.548029   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.881704   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:23.907636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.006951   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.048091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.407560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.506856   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.548705   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.907750   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.006941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.048097   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.408473   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.507086   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.548651   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.907834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.007469   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.415775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.417875   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:26.507746   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.549493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.908404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.009635   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.048391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.408105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.509068   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.548222   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.908042   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.007883   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.047932   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.408370   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.507379   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.548467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.898654   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:28.907039   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.007310   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.048105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.407790   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.507440   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.598195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.907810   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.007961   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.407748   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.548456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.908206   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.007623   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.048306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.380691   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:31.407719   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.506896   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.547878   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.907840   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.007212   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.048133   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.407238   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.506798   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.548528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.907455   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.006747   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.047570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.381514   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:33.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.506478   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.548374   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.907944   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.007347   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.048784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.408200   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.506244   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.548189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.907539   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.006862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.049282   12642 kapi.go:107] duration metric: took 1m35.505619997s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:25:35.407599   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.881121   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:35.907998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.007303   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.407476   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.506940   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.006647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.408081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.507464   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.908184   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.007201   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.381474   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:38.407986   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.508647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.908946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.008435   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.408471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.510473   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.995610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.008869   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.397632   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:40.408032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.509659   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.907933   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.007031   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.408056   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.508041   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.908287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.006885   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.407440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.880849   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:42.907379   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.008348   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.408661   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.907189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.006692   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.407965   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.507074   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.908416   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.006411   12642 kapi.go:107] duration metric: took 1m46.503572843s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:45.381179   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:45.459019   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.907457   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.408510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.907182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.396594   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:47.407631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.908030   12642 kapi.go:107] duration metric: took 1m45.003803312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:47.909696   12642 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-821781 cluster.
	I0916 10:25:47.911374   12642 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:47.913470   12642 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:47.915138   12642 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, helm-tiller, metrics-server, storage-provisioner, cloud-spanner, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:47.916678   12642 addons.go:510] duration metric: took 1m55.100061322s for enable addons: enabled=[ingress-dns nvidia-device-plugin helm-tiller metrics-server storage-provisioner cloud-spanner yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
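The gcp-auth messages above document a per-pod opt-out: a label with the `gcp-auth-skip-secret` key tells the webhook to leave that pod unpatched. A minimal sketch of using it against the addons-821781 context from this run; the pod name and image are illustrative:

# Per the message above, a label with the gcp-auth-skip-secret key
# opts this pod out of credential mounting.
kubectl --context addons-821781 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds              # illustrative name
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: busybox:1.36           # illustrative image
    command: ["sleep", "3600"]
EOF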
	I0916 10:25:49.881225   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:52.381442   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:54.380287   12642 pod_ready.go:93] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.380308   12642 pod_ready.go:82] duration metric: took 1m18.504902601s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.380318   12642 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384430   12642 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.384450   12642 pod_ready.go:82] duration metric: took 4.126025ms for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384468   12642 pod_ready.go:39] duration metric: took 1m20.581229133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:25:54.384485   12642 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:25:54.384513   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:54.384564   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:54.417384   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.417411   12642 cri.go:89] found id: ""
	I0916 10:25:54.417421   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:54.417476   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.420785   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:54.420839   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:54.452868   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.452890   12642 cri.go:89] found id: ""
	I0916 10:25:54.452898   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:54.452950   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.456066   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:54.456119   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:54.487907   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:54.487930   12642 cri.go:89] found id: ""
	I0916 10:25:54.487938   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:54.487992   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.491215   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:54.491266   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:54.523745   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.523766   12642 cri.go:89] found id: ""
	I0916 10:25:54.523775   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:54.523831   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.527161   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:54.527229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:54.560095   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.560123   12642 cri.go:89] found id: ""
	I0916 10:25:54.560133   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:54.560180   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.563529   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:54.563589   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:54.596576   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:54.596600   12642 cri.go:89] found id: ""
	I0916 10:25:54.596608   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:54.596655   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.599825   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:54.599906   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:54.632507   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:54.632531   12642 cri.go:89] found id: ""
	I0916 10:25:54.632539   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:54.632620   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.635882   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:54.635906   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:54.698451   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:54.698492   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:54.799766   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:54.799797   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.843933   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:54.843963   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.894142   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:54.894174   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.934257   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:54.934288   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.967135   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:54.967163   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:55.001104   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:55.001133   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:55.013631   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:55.013663   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:55.047469   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:55.047499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:55.106750   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:55.106787   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:55.182277   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:55.182324   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
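The log-gathering pass above follows a two-step pattern: resolve a container ID by name, then tail that container's logs. A sketch of reproducing it by hand, assuming crictl is available inside the minikube node (for example via `minikube ssh`):

#!/usr/bin/env bash
# Resolve the kube-apiserver container ID, then tail its logs,
# mirroring the cri.go / logs.go flow recorded above.
set -euo pipefail
name="kube-apiserver"                              # any --name filter works
id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
if [ -z "$id" ]; then
  echo "no container matching $name" >&2
  exit 1
fi
sudo crictl logs --tail 400 "$id"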
	I0916 10:25:57.726595   12642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:25:57.740119   12642 api_server.go:72] duration metric: took 2m4.923540882s to wait for apiserver process to appear ...
	I0916 10:25:57.740152   12642 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:25:57.740187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:57.740229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:57.772533   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:57.772558   12642 cri.go:89] found id: ""
	I0916 10:25:57.772566   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:57.772615   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.775778   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:57.775838   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:57.813245   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:57.813271   12642 cri.go:89] found id: ""
	I0916 10:25:57.813281   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:57.813354   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.817691   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:57.817769   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:57.851306   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:57.851328   12642 cri.go:89] found id: ""
	I0916 10:25:57.851335   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:57.851378   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.854640   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:57.854706   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:57.904175   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:57.904198   12642 cri.go:89] found id: ""
	I0916 10:25:57.904205   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:57.904252   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.907938   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:57.907996   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:57.941402   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:57.941421   12642 cri.go:89] found id: ""
	I0916 10:25:57.941428   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:57.941481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.944741   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:57.944796   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:57.979020   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:57.979042   12642 cri.go:89] found id: ""
	I0916 10:25:57.979051   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:57.979108   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.982381   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:57.982431   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:58.014858   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:58.014881   12642 cri.go:89] found id: ""
	I0916 10:25:58.014890   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:58.014937   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:58.018251   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:58.018272   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:58.050812   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:58.050847   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:58.108286   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:58.108318   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:58.182964   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:58.183002   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:58.248089   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:58.248126   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:58.260293   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:58.260339   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:58.355509   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:58.355535   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:58.398314   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:58.398350   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:58.445703   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:58.445736   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:25:58.485997   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:58.486025   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:58.519971   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:58.519998   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:58.558470   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:58.558499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.092930   12642 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:26:01.096706   12642 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:26:01.097615   12642 api_server.go:141] control plane version: v1.31.1
	I0916 10:26:01.097635   12642 api_server.go:131] duration metric: took 3.357476241s to wait for apiserver health ...
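The healthz probe above can be issued manually. A sketch, assuming the default RBAC binding that exposes /healthz to unauthenticated callers (otherwise client credentials are needed); the second form goes through the kubeconfig and sidesteps the TLS detail:

# Direct probe; -k skips verification of the apiserver's self-signed cert.
curl -sk https://192.168.49.2:8443/healthz && echo
# Equivalent request using the configured kubeconfig credentials.
kubectl --context addons-821781 get --raw /healthz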
	I0916 10:26:01.097642   12642 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:26:01.097662   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:26:01.097709   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:26:01.131450   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.131477   12642 cri.go:89] found id: ""
	I0916 10:26:01.131489   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:26:01.131542   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.134752   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:26:01.134813   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:26:01.166978   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.167002   12642 cri.go:89] found id: ""
	I0916 10:26:01.167014   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:26:01.167057   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.170770   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:26:01.170821   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:26:01.203544   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.203564   12642 cri.go:89] found id: ""
	I0916 10:26:01.203571   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:26:01.203632   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.207027   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:26:01.207101   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:26:01.240766   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.240787   12642 cri.go:89] found id: ""
	I0916 10:26:01.240795   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:26:01.240847   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.244187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:26:01.244242   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:26:01.278657   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.278686   12642 cri.go:89] found id: ""
	I0916 10:26:01.278696   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:26:01.278754   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.282264   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:26:01.282333   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:26:01.316408   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.316431   12642 cri.go:89] found id: ""
	I0916 10:26:01.316439   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:26:01.316481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.319848   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:26:01.319913   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:26:01.352617   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.352637   12642 cri.go:89] found id: ""
	I0916 10:26:01.352645   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:26:01.352692   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.356052   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:26:01.356078   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:26:01.430171   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:26:01.430203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:26:01.471970   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:26:01.472001   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.512405   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:26:01.512437   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.545482   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:26:01.545511   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:26:01.657458   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:26:01.657495   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.703167   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:26:01.703203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.753488   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:26:01.753528   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.788778   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:26:01.788809   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.847216   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:26:01.847252   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.883444   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:26:01.883479   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:26:01.950602   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:26:01.950637   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:26:04.473621   12642 system_pods.go:59] 19 kube-system pods found
	I0916 10:26:04.473667   12642 system_pods.go:61] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.473674   12642 system_pods.go:61] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.473678   12642 system_pods.go:61] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.473681   12642 system_pods.go:61] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.473685   12642 system_pods.go:61] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.473688   12642 system_pods.go:61] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.473692   12642 system_pods.go:61] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.473696   12642 system_pods.go:61] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.473699   12642 system_pods.go:61] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.473702   12642 system_pods.go:61] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.473706   12642 system_pods.go:61] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.473709   12642 system_pods.go:61] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.473712   12642 system_pods.go:61] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.473715   12642 system_pods.go:61] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.473718   12642 system_pods.go:61] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.473722   12642 system_pods.go:61] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.473725   12642 system_pods.go:61] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.473728   12642 system_pods.go:61] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.473731   12642 system_pods.go:61] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.473737   12642 system_pods.go:74] duration metric: took 3.376089349s to wait for pod list to return data ...
	I0916 10:26:04.473747   12642 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:26:04.476243   12642 default_sa.go:45] found service account: "default"
	I0916 10:26:04.476265   12642 default_sa.go:55] duration metric: took 2.512507ms for default service account to be created ...
	I0916 10:26:04.476273   12642 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:26:04.484719   12642 system_pods.go:86] 19 kube-system pods found
	I0916 10:26:04.484756   12642 system_pods.go:89] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.484762   12642 system_pods.go:89] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.484766   12642 system_pods.go:89] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.484770   12642 system_pods.go:89] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.484774   12642 system_pods.go:89] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.484778   12642 system_pods.go:89] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.484782   12642 system_pods.go:89] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.484786   12642 system_pods.go:89] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.484790   12642 system_pods.go:89] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.484796   12642 system_pods.go:89] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.484800   12642 system_pods.go:89] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.484803   12642 system_pods.go:89] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.484807   12642 system_pods.go:89] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.484812   12642 system_pods.go:89] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.484818   12642 system_pods.go:89] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.484822   12642 system_pods.go:89] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.484826   12642 system_pods.go:89] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.484830   12642 system_pods.go:89] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.484834   12642 system_pods.go:89] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.484840   12642 system_pods.go:126] duration metric: took 8.563189ms to wait for k8s-apps to be running ...
	I0916 10:26:04.484851   12642 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:26:04.484897   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:26:04.496212   12642 system_svc.go:56] duration metric: took 11.351945ms WaitForService to wait for kubelet
	I0916 10:26:04.496239   12642 kubeadm.go:582] duration metric: took 2m11.67966753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:26:04.496261   12642 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:26:04.499350   12642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:26:04.499377   12642 node_conditions.go:123] node cpu capacity is 8
	I0916 10:26:04.499389   12642 node_conditions.go:105] duration metric: took 3.122952ms to run NodePressure ...
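The NodePressure check above reads the node object's capacity and condition fields; a sketch of inspecting the same data directly (node name taken from this run):

# Capacity fields reported above (ephemeral storage and CPU).
kubectl --context addons-821781 get node addons-821781 \
  -o jsonpath='{.status.capacity.ephemeral-storage} {.status.capacity.cpu}{"\n"}'
# Pressure conditions (MemoryPressure, DiskPressure, PIDPressure) should be False.
kubectl --context addons-821781 describe node addons-821781 | grep -A6 'Conditions:'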
	I0916 10:26:04.499400   12642 start.go:241] waiting for startup goroutines ...
	I0916 10:26:04.499406   12642 start.go:246] waiting for cluster config update ...
	I0916 10:26:04.499455   12642 start.go:255] writing updated cluster config ...
	I0916 10:26:04.519561   12642 ssh_runner.go:195] Run: rm -f paused
	I0916 10:26:04.665202   12642 out.go:177] * Done! kubectl is now configured to use "addons-821781" cluster and "default" namespace by default
	E0916 10:26:04.666644   12642 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
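The `exec format error` recorded here, which also underlies most kubectl-driven failures in this report, means the kernel refused to execute /usr/local/bin/kubectl; that typically indicates a binary built for a different architecture or a truncated download. A minimal sketch for diagnosing it:

# Compare the binary's target architecture against the host's.
file /usr/local/bin/kubectl
uname -m
# A truncated or corrupted download also produces this error;
# a valid ELF binary starts with the bytes \177 E L F.
head -c4 /usr/local/bin/kubectl | od -c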
	
	
	==> CRI-O <==
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.210730567Z" level=info msg="Stopped pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=5920ec82-b971-47e8-ab8f-97f10512b921 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.932887074Z" level=info msg="Removing container: 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557" id=7c971f4c-d380-4cd4-ad5a-169db70dfa55 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.946676009Z" level=info msg="Removed container 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=7c971f4c-d380-4cd4-ad5a-169db70dfa55 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.031051552Z" level=info msg="Stopping container: 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f (timeout: 30s)" id=16404828-538d-4914-bc6a-34043446f331 name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:27:27 addons-821781 conmon[4128]: conmon 960e66cd3823f16f4a22 <ninfo>: container 4140 exited with status 2
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.167308595Z" level=info msg="Stopped container 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f: kube-system/tiller-deploy-b48cc5f79-jcsqv/tiller" id=16404828-538d-4914-bc6a-34043446f331 name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.167853011Z" level=info msg="Stopping pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=75e7f390-73a1-4c31-ae77-e6004ec4617f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.168131464Z" level=info msg="Got pod network &{Name:tiller-deploy-b48cc5f79-jcsqv Namespace:kube-system ID:5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc UID:3177a86a-dac6-4f73-acef-e8b6f8c0aed1 NetNS:/var/run/netns/b92901ae-3e92-487e-94be-09e4b8bf1ba5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.168308000Z" level=info msg="Deleting pod kube-system_tiller-deploy-b48cc5f79-jcsqv from CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.214863134Z" level=info msg="Stopped pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=75e7f390-73a1-4c31-ae77-e6004ec4617f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.985987444Z" level=info msg="Removing container: 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f" id=20c33ef3-37f7-4f43-97d7-23b173848fd1 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:28 addons-821781 crio[1028]: time="2024-09-16 10:27:28.002745748Z" level=info msg="Removed container 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f: kube-system/tiller-deploy-b48cc5f79-jcsqv/tiller" id=20c33ef3-37f7-4f43-97d7-23b173848fd1 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.198286631Z" level=info msg="Stopping pod sandbox: 300e5b8a22c3edbb8b2b84410c6e22ea3bb4d309590d099249c250241dd694ed" id=3c0fdf6d-b3ae-4175-9d87-3618e8f4f71c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.198348502Z" level=info msg="Stopped pod sandbox (already stopped): 300e5b8a22c3edbb8b2b84410c6e22ea3bb4d309590d099249c250241dd694ed" id=3c0fdf6d-b3ae-4175-9d87-3618e8f4f71c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.198642882Z" level=info msg="Removing pod sandbox: 300e5b8a22c3edbb8b2b84410c6e22ea3bb4d309590d099249c250241dd694ed" id=23484731-db4a-4bfc-b932-8b78b207f3c5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.205894535Z" level=info msg="Removed pod sandbox: 300e5b8a22c3edbb8b2b84410c6e22ea3bb4d309590d099249c250241dd694ed" id=23484731-db4a-4bfc-b932-8b78b207f3c5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.206314240Z" level=info msg="Stopping pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=00c60787-d056-4d82-a5ba-1ba34f5aae8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.206350928Z" level=info msg="Stopped pod sandbox (already stopped): a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=00c60787-d056-4d82-a5ba-1ba34f5aae8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.206580298Z" level=info msg="Removing pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=9f9aa37c-fc7b-4d1a-af38-b4061811956f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.213389226Z" level=info msg="Removed pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=9f9aa37c-fc7b-4d1a-af38-b4061811956f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.213824980Z" level=info msg="Stopping pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=ed2f42f9-aa3a-40c9-8df8-926cdbd385ca name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.213871523Z" level=info msg="Stopped pod sandbox (already stopped): 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=ed2f42f9-aa3a-40c9-8df8-926cdbd385ca name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.214166373Z" level=info msg="Removing pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=2da8c7e3-71a8-4291-bcf0-102db2d873de name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.221101717Z" level=info msg="Removed pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=2da8c7e3-71a8-4291-bcf0-102db2d873de name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:31:28 addons-821781 crio[1028]: time="2024-09-16 10:31:28.157520629Z" level=info msg="Stopping container: 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302 (timeout: 30s)" id=7de2b76d-f0b1-40f7-87e6-4a8075cdda9b name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0dbc187486a77       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 5 minutes ago       Running             gcp-auth                                 0                   754882dcda596       gcp-auth-89d5ffd79-b6kzx
	3603c45c1e4ab       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             5 minutes ago       Running             controller                               0                   31855714f04d8       ingress-nginx-controller-bc57996ff-8jlsc
	b6501ff69088d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          5 minutes ago       Running             csi-snapshotter                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	85a5122ba30eb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          5 minutes ago       Running             csi-provisioner                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	33527f5387a55       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            5 minutes ago       Running             liveness-probe                           0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	2b3dcba2a09e7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           5 minutes ago       Running             hostpath                                 0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ea5a7e7486ae3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	5247d23b3a397       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   5faba155231dd       snapshot-controller-56fcc65765-tdxm7
	68547a0643ba6       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   4cb61d4296010       csi-hostpath-resizer-0
	a2eec9453e9d3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   205f02ffaeb65       csi-hostpath-attacher-0
	d3033819602e2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ffffb6d23a520       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   6 minutes ago       Exited              patch                                    0                   0defdefc8e690       ingress-nginx-admission-patch-22v56
	adcb6aad69051       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   b44ff8bf56a7c       snapshot-controller-56fcc65765-b752p
	d7c74998aab32       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   6 minutes ago       Exited              create                                   0                   92efe213e3cc9       ingress-nginx-admission-create-dgb9n
	318be751079db       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             6 minutes ago       Running             local-path-provisioner                   0                   cdfaa5befff59       local-path-provisioner-86d989889c-6xhgj
	2a650198714d3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        6 minutes ago       Exited              metrics-server                           0                   a92ded8c2c84e       metrics-server-84c5f94fbc-t6sfx
	9db25418c7b36       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             6 minutes ago       Running             minikube-ingress-dns                     0                   0a160d796662b       kube-ingress-dns-minikube
	fd1c0fa2e8742       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             6 minutes ago       Running             storage-provisioner                      0                   578052293e511       storage-provisioner
	5fc078f948938       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             6 minutes ago       Running             coredns                                  0                   dd25c29f2c98b       coredns-7c65d6cfc9-f6b44
	8953bd3ac9bbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             7 minutes ago       Running             kube-proxy                               0                   31612ec902e41       kube-proxy-7grrw
	e3e02e9338f21       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             7 minutes ago       Running             kindnet-cni                              0                   efca226e04346       kindnet-2bwl4
	f7c9dd60c650e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             7 minutes ago       Running             kube-apiserver                           0                   325d1d3961d30       kube-apiserver-addons-821781
	aef3299386ef0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             7 minutes ago       Running             etcd                                     0                   5db6677261478       etcd-addons-821781
	23817b3f6401e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             7 minutes ago       Running             kube-scheduler                           0                   192ccdf49d648       kube-scheduler-addons-821781
	319dfee9ab334       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             7 minutes ago       Running             kube-controller-manager                  0                   471807181e888       kube-controller-manager-addons-821781
	
	
	==> coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] <==
	[INFO] 10.244.0.11:54433 - 5196 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117872s
	[INFO] 10.244.0.11:55203 - 39009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079023s
	[INFO] 10.244.0.11:55203 - 18278 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066179s
	[INFO] 10.244.0.11:53992 - 3361 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005725192s
	[INFO] 10.244.0.11:53992 - 5182 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005902528s
	[INFO] 10.244.0.11:58640 - 39752 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005962306s
	[INFO] 10.244.0.11:58640 - 45636 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007442692s
	[INFO] 10.244.0.11:58081 - 46876 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004814518s
	[INFO] 10.244.0.11:58081 - 7960 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005069952s
	[INFO] 10.244.0.11:56786 - 21825 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084442s
	[INFO] 10.244.0.11:56786 - 8540 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121405s
	[INFO] 10.244.0.21:49162 - 58748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183854s
	[INFO] 10.244.0.21:60540 - 21143 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264439s
	[INFO] 10.244.0.21:57612 - 22108 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123843s
	[INFO] 10.244.0.21:56370 - 29690 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174744s
	[INFO] 10.244.0.21:53939 - 42345 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115165s
	[INFO] 10.244.0.21:54191 - 30184 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102696s
	[INFO] 10.244.0.21:43721 - 49242 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007714914s
	[INFO] 10.244.0.21:58502 - 61297 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008280312s
	[INFO] 10.244.0.21:45585 - 36043 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008154564s
	[INFO] 10.244.0.21:50514 - 10749 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008661461s
	[INFO] 10.244.0.21:41083 - 31758 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006832696s
	[INFO] 10.244.0.21:53762 - 8306 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007439813s
	[INFO] 10.244.0.21:37796 - 13809 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002178233s
	[INFO] 10.244.0.21:36516 - 28559 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002337896s
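
The NXDOMAIN-then-NOERROR pattern above is the resolver walking the pod's resolv.conf search path: with the usual ndots:5 setting, a short name such as storage.googleapis.com is first tried with every search suffix (each answered NXDOMAIN) before the absolute name is queried and resolves with NOERROR. A minimal Go sketch of that expansion; the search list is inferred from the logged query names, not taken from an actual resolv.conf in this report:

package main

import "fmt"

func main() {
	// Search suffixes inferred from the coredns queries above (a hypothetical
	// reconstruction of the querying pod's resolv.conf search line).
	search := []string{
		"gcp-auth.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"europe-west2-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal",
		"google.internal",
	}
	// "storage.googleapis.com" has fewer dots than ndots:5, so every suffix
	// is tried (NXDOMAIN each time) before the bare name succeeds.
	name := "storage.googleapis.com"
	for _, s := range search {
		fmt.Printf("%s.%s -> NXDOMAIN\n", name, s)
	}
	fmt.Printf("%s. -> NOERROR\n", name)
}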
	
	
	==> describe nodes <==
	Name:               addons-821781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-821781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-821781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-821781
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-821781"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-821781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:31:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:24:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-821781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a93a1abfd8e74fb89ecb0b25fd80b840
	  System UUID:                c474d608-aa29-4551-b357-d17e9479a01d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-b6kzx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8jlsc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m31s
	  kube-system                 coredns-7c65d6cfc9-f6b44                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m37s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 csi-hostpathplugin-pwtwp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 etcd-addons-821781                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m42s
	  kube-system                 kindnet-2bwl4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m37s
	  kube-system                 kube-apiserver-addons-821781                250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-controller-manager-addons-821781       200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-proxy-7grrw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-scheduler-addons-821781                100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 snapshot-controller-56fcc65765-b752p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 snapshot-controller-56fcc65765-tdxm7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  local-path-storage          local-path-provisioner-86d989889c-6xhgj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m35s  kube-proxy       
	  Normal   Starting                 7m42s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m42s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m42s  kubelet          Node addons-821781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m42s  kubelet          Node addons-821781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m42s  kubelet          Node addons-821781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m38s  node-controller  Node addons-821781 event: Registered Node addons-821781 in Controller
	  Normal   NodeReady                6m56s  kubelet          Node addons-821781 status is now: NodeReady
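
A note on the Allocated resources table above: percentages are computed against the node's allocatable capacity, truncated to whole percent. For example, 950m of CPU requests on 8 allocatable cores is 950/8000, reported as 11%. A one-line check in Go (illustrative arithmetic only):

package main

import "fmt"

func main() {
	// 950 millicores requested / 8000 millicores allocatable, truncated to a
	// whole percent as in the "describe nodes" output.
	fmt.Println(100 * 950 / 8000) // prints 11
}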
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.000714]  #3
	[  +0.002750]  #4
	[  +0.001708] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003513] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002098] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] <==
	{"level":"warn","ts":"2024-09-16T10:24:33.965134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.284694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-09-16T10:24:33.965140Z","caller":"traceutil/trace.go:171","msg":"trace[589393049] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.482158ms","start":"2024-09-16T10:24:33.834652Z","end":"2024-09-16T10:24:33.965134Z","steps":["trace[589393049] 'agreement among raft nodes before linearized reading'  (duration: 130.392783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.112983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs\" ","response":"range_response_count:1 size:560"}
	{"level":"warn","ts":"2024-09-16T10:24:33.965172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.412831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/default\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964790Z","caller":"traceutil/trace.go:171","msg":"trace[1719481168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-resizer; range_end:; response_count:1; response_revision:871; }","duration":"130.308398ms","start":"2024-09-16T10:24:33.834475Z","end":"2024-09-16T10:24:33.964784Z","steps":["trace[1719481168] 'agreement among raft nodes before linearized reading'  (duration: 130.231604ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965031Z","caller":"traceutil/trace.go:171","msg":"trace[1439753586] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:871; }","duration":"130.351105ms","start":"2024-09-16T10:24:33.834675Z","end":"2024-09-16T10:24:33.965026Z","steps":["trace[1439753586] 'agreement among raft nodes before linearized reading'  (duration: 130.285964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.622694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:979"}
	{"level":"info","ts":"2024-09-16T10:24:33.965260Z","caller":"traceutil/trace.go:171","msg":"trace[3301844] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.644948ms","start":"2024-09-16T10:24:33.834605Z","end":"2024-09-16T10:24:33.965250Z","steps":["trace[3301844] 'agreement among raft nodes before linearized reading'  (duration: 130.58562ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.745393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:1 size:878"}
	{"level":"info","ts":"2024-09-16T10:24:33.965091Z","caller":"traceutil/trace.go:171","msg":"trace[630312888] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.242708ms","start":"2024-09-16T10:24:33.834842Z","end":"2024-09-16T10:24:33.965085Z","steps":["trace[630312888] 'agreement among raft nodes before linearized reading'  (duration: 130.2013ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965306Z","caller":"traceutil/trace.go:171","msg":"trace[687212945] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:1; response_revision:871; }","duration":"130.768911ms","start":"2024-09-16T10:24:33.834532Z","end":"2024-09-16T10:24:33.965301Z","steps":["trace[687212945] 'agreement among raft nodes before linearized reading'  (duration: 130.728326ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965159Z","caller":"traceutil/trace.go:171","msg":"trace[1851867066] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:871; }","duration":"130.30942ms","start":"2024-09-16T10:24:33.834844Z","end":"2024-09-16T10:24:33.965154Z","steps":["trace[1851867066] 'agreement among raft nodes before linearized reading'  (duration: 130.267065ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965180Z","caller":"traceutil/trace.go:171","msg":"trace[395277833] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.138451ms","start":"2024-09-16T10:24:33.835036Z","end":"2024-09-16T10:24:33.965175Z","steps":["trace[395277833] 'agreement among raft nodes before linearized reading'  (duration: 130.084008ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.964761Z","caller":"traceutil/trace.go:171","msg":"trace[1846466404] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.050288ms","start":"2024-09-16T10:24:33.834699Z","end":"2024-09-16T10:24:33.964750Z","steps":["trace[1846466404] 'agreement among raft nodes before linearized reading'  (duration: 129.823354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.867331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964791Z","caller":"traceutil/trace.go:171","msg":"trace[1570104672] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"101.79293ms","start":"2024-09-16T10:24:33.862992Z","end":"2024-09-16T10:24:33.964785Z","steps":["trace[1570104672] 'agreement among raft nodes before linearized reading'  (duration: 101.763738ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965421Z","caller":"traceutil/trace.go:171","msg":"trace[1827982125] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:871; }","duration":"130.890995ms","start":"2024-09-16T10:24:33.834525Z","end":"2024-09-16T10:24:33.965416Z","steps":["trace[1827982125] 'agreement among raft nodes before linearized reading'  (duration: 130.852764ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965209Z","caller":"traceutil/trace.go:171","msg":"trace[945447364] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.449227ms","start":"2024-09-16T10:24:33.834754Z","end":"2024-09-16T10:24:33.965203Z","steps":["trace[945447364] 'agreement among raft nodes before linearized reading'  (duration: 130.396497ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.001003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-09-16T10:24:33.965579Z","caller":"traceutil/trace.go:171","msg":"trace[1490541276] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:871; }","duration":"131.063942ms","start":"2024-09-16T10:24:33.834502Z","end":"2024-09-16T10:24:33.965566Z","steps":["trace[1490541276] 'agreement among raft nodes before linearized reading'  (duration: 130.98224ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.964852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.18611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/snapshot-controller\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2024-09-16T10:24:33.965093Z","caller":"traceutil/trace.go:171","msg":"trace[1524858032] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"129.821011ms","start":"2024-09-16T10:24:33.835267Z","end":"2024-09-16T10:24:33.965088Z","steps":["trace[1524858032] 'agreement among raft nodes before linearized reading'  (duration: 129.760392ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965632Z","caller":"traceutil/trace.go:171","msg":"trace[945136232] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/snapshot-controller; range_end:; response_count:1; response_revision:871; }","duration":"129.963575ms","start":"2024-09-16T10:24:33.835661Z","end":"2024-09-16T10:24:33.965624Z","steps":["trace[945136232] 'agreement among raft nodes before linearized reading'  (duration: 129.14136ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:26.413976Z","caller":"traceutil/trace.go:171","msg":"trace[182413184] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"129.574416ms","start":"2024-09-16T10:25:26.284376Z","end":"2024-09-16T10:25:26.413950Z","steps":["trace[182413184] 'process raft request'  (duration: 67.733345ms)","trace[182413184] 'compare'  (duration: 61.701552ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:48.300626Z","caller":"traceutil/trace.go:171","msg":"trace[869038067] transaction","detail":"{read_only:false; response_revision:1265; number_of_response:1; }","duration":"110.748846ms","start":"2024-09-16T10:25:48.189856Z","end":"2024-09-16T10:25:48.300605Z","steps":["trace[869038067] 'process raft request'  (duration: 107.391476ms)"],"step_count":1}
	
	
	==> gcp-auth [0dbc187486a77d691a5db4775360d83cdf6dd7084d4c3bd9123b7e051fd6bd74] <==
	2024/09/16 10:25:47 GCP Auth Webhook started!
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	
	
	==> kernel <==
	 10:31:29 up 13 min,  0 users,  load average: 0.12, 0.39, 0.28
	Linux addons-821781 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] <==
	I0916 10:29:23.305476       1 main.go:299] handling current node
	I0916 10:29:33.304126       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:29:33.304169       1 main.go:299] handling current node
	I0916 10:29:43.305563       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:29:43.305595       1 main.go:299] handling current node
	I0916 10:29:53.299430       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:29:53.299477       1 main.go:299] handling current node
	I0916 10:30:03.304990       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:03.305028       1 main.go:299] handling current node
	I0916 10:30:13.305464       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:13.305594       1 main.go:299] handling current node
	I0916 10:30:23.305490       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:23.305568       1 main.go:299] handling current node
	I0916 10:30:33.304728       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:33.304762       1 main.go:299] handling current node
	I0916 10:30:43.305391       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:43.305423       1 main.go:299] handling current node
	I0916 10:30:53.298935       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:53.298976       1 main.go:299] handling current node
	I0916 10:31:03.301461       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:03.301497       1 main.go:299] handling current node
	I0916 10:31:13.305439       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:13.305475       1 main.go:299] handling current node
	I0916 10:31:23.305425       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:23.305467       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] <==
	W0916 10:24:33.565907       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	W0916 10:24:33.565951       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.565953       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	E0916 10:24:33.565979       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:33.599472       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.599513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:58.720213       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 10:24:58.720232       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:58.720259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 10:24:58.720301       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:24:58.721354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 10:24:58.721362       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:25:54.202103       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:25:54.202136       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.74.143:443: connect: connection refused" logger="UnhandledError"
	E0916 10:25:54.202195       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:25:54.215066       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:26:47.647164       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:26:48.662402       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 10:26:53.534738       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.40.159"}
	
	
	==> kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] <==
	W0916 10:26:55.044011       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:26:55.044048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:26:57.755257       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0916 10:26:57.926605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="51.47µs"
	I0916 10:26:57.939305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.337707ms"
	I0916 10:26:57.939375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="37.082µs"
	I0916 10:27:04.034685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="8.781µs"
	W0916 10:27:04.365551       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:04.365591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:27:14.151507       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0916 10:27:21.385941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-821781"
	I0916 10:27:27.020674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="7.724µs"
	W0916 10:27:28.351938       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:28.351975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:28:01.562193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:28:01.562231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:28:51.468704       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:28:51.468745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:29:33.531148       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:29:33.531188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:30:17.809574       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:17.809622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:31:12.836852       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:31:12.836900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:31:28.145289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="5.298µs"
	
	
	==> kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] <==
	I0916 10:23:52.638596       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:52.921753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:52.922374       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:53.313675       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:53.319718       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:53.497957       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:53.508623       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:53.508659       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:53.510794       1 config.go:199] "Starting service config controller"
	I0916 10:23:53.510833       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:53.510868       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:53.510874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:53.511480       1 config.go:328] "Starting node config controller"
	I0916 10:23:53.511491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:53.617474       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:53.617556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:23:53.711794       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] <==
	W0916 10:23:44.897301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0916 10:23:44.897124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:44.898296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:44.897140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:44.898337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:44.898344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:45.722888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:45.722927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.731239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.731280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.734491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:23:45.734527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.741804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.741845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.771121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:45.771158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.886831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.886867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.913242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.913290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:46.023935       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:46.023972       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:23:48.220429       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:30:17 addons-821781 kubelet[1623]: E0916 10:30:17.268685    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482617268457169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:27 addons-821781 kubelet[1623]: E0916 10:30:27.271317    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482627271095480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:27 addons-821781 kubelet[1623]: E0916 10:30:27.271357    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482627271095480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:37 addons-821781 kubelet[1623]: E0916 10:30:37.273748    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482637273451761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:37 addons-821781 kubelet[1623]: E0916 10:30:37.273795    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482637273451761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:47 addons-821781 kubelet[1623]: E0916 10:30:47.275892    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482647275644601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:47 addons-821781 kubelet[1623]: E0916 10:30:47.275926    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482647275644601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:57 addons-821781 kubelet[1623]: E0916 10:30:57.277868    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482657277605725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:57 addons-821781 kubelet[1623]: E0916 10:30:57.277907    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482657277605725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:07 addons-821781 kubelet[1623]: E0916 10:31:07.280051    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482667279785517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:07 addons-821781 kubelet[1623]: E0916 10:31:07.280087    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482667279785517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:17 addons-821781 kubelet[1623]: E0916 10:31:17.283113    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482677282874049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:17 addons-821781 kubelet[1623]: E0916 10:31:17.283145    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482677282874049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:27 addons-821781 kubelet[1623]: E0916 10:31:27.285622    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482687285376468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:27 addons-821781 kubelet[1623]: E0916 10:31:27.285654    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482687285376468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.402891    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pcs7\" (UniqueName: \"kubernetes.io/projected/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-kube-api-access-5pcs7\") pod \"82f2a6b8-aafa-4f82-a707-d4bdaedd415d\" (UID: \"82f2a6b8-aafa-4f82-a707-d4bdaedd415d\") "
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.402951    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-tmp-dir\") pod \"82f2a6b8-aafa-4f82-a707-d4bdaedd415d\" (UID: \"82f2a6b8-aafa-4f82-a707-d4bdaedd415d\") "
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.403329    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "82f2a6b8-aafa-4f82-a707-d4bdaedd415d" (UID: "82f2a6b8-aafa-4f82-a707-d4bdaedd415d"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.404946    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-kube-api-access-5pcs7" (OuterVolumeSpecName: "kube-api-access-5pcs7") pod "82f2a6b8-aafa-4f82-a707-d4bdaedd415d" (UID: "82f2a6b8-aafa-4f82-a707-d4bdaedd415d"). InnerVolumeSpecName "kube-api-access-5pcs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.503499    1623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5pcs7\" (UniqueName: \"kubernetes.io/projected/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-kube-api-access-5pcs7\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.503539    1623 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-tmp-dir\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.515541    1623 scope.go:117] "RemoveContainer" containerID="2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302"
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.533377    1623 scope.go:117] "RemoveContainer" containerID="2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302"
	Sep 16 10:31:29 addons-821781 kubelet[1623]: E0916 10:31:29.533950    1623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302\": container with ID starting with 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302 not found: ID does not exist" containerID="2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302"
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.533994    1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302"} err="failed to get container status \"2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302\": rpc error: code = NotFound desc = could not find container \"2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302\": container with ID starting with 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302 not found: ID does not exist"
	
	
	==> storage-provisioner [fd1c0fa2e8742125904216a45b6d84f9b367888422cb6083d3e482fd77452994] <==
	I0916 10:24:34.797513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:34.805288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:34.805397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:34.813404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:34.813588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	I0916 10:24:34.814304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d6ca95d-581a-4537-b803-ac9e02f43ec1", APIVersion:"v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4 became leader
	I0916 10:24:34.914571       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-821781 -n addons-821781
helpers_test.go:261: (dbg) Run:  kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (426.12µs)
helpers_test.go:263: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/MetricsServer (323.56s)
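Note that every kubectl invocation in this run dies in well under a millisecond with "fork/exec /usr/local/bin/kubectl: exec format error", i.e. the kernel returns ENOEXEC before the client ever contacts the apiserver. That implicates a corrupt or wrong-architecture kubectl binary on the build agent rather than the addon under test. A minimal triage sketch, assuming shell access to the agent (the path comes from the failure message; the expected outputs are assumptions for a healthy amd64 binary):

	file /usr/local/bin/kubectl              # expect: ELF 64-bit LSB executable, x86-64
	ls -l /usr/local/bin/kubectl             # a zero-byte or truncated download also yields ENOEXEC
	head -c 4 /usr/local/bin/kubectl | xxd   # a valid ELF binary starts with 7f 45 4c 46 ("\x7fELF")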

TestAddons/parallel/HelmTiller (82.64s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 7.904253ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003814657s
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (346.866µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (409.098µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (493.493µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (391.214µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (455.807µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (454.862µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (424.217µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (385.345µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (432.421µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (379.415µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (455.192µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-821781 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (460.21µs)
addons_test.go:489: failed checking helm tiller: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 addons disable helm-tiller --alsologtostderr -v=1
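The twelve back-to-back retries above each fail in roughly 350-500µs with the same exec format error, so the check never reached the tiller-deploy pod that the earlier wait had already found healthy. One way to separate a broken host kubectl from a broken cluster is minikube's bundled kubectl; a hypothetical manual check against this profile (not part of the recorded run) would be:

	out/minikube-linux-amd64 -p addons-821781 kubectl -- version --client
	out/minikube-linux-amd64 -p addons-821781 kubectl -- get pods -n kube-system -l app=helm

If those succeed while /usr/local/bin/kubectl still fails, the fault is confined to the agent's kubectl install.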
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/HelmTiller]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-821781
helpers_test.go:235: (dbg) docker inspect addons-821781:

-- stdout --
	[
	    {
	        "Id": "60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9",
	        "Created": "2024-09-16T10:23:34.422231958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:34.564816551Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hosts",
	        "LogPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9-json.log",
	        "Name": "/addons-821781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-821781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-821781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-821781",
	                "Source": "/var/lib/docker/volumes/addons-821781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-821781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-821781",
	                "name.minikube.sigs.k8s.io": "addons-821781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb89cb54fc4711f104a02c8d2ebaaa0dae68769e21054477c7dd719ee876c61d",
	            "SandboxKey": "/var/run/docker/netns/cb89cb54fc47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-821781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "66d8d4a2fe0f9ff012a57288f3992a27df27bc2a73eb33a40ff3adbc0fa270ea",
	                    "EndpointID": "54da588c62c62ca60fdaac7dbe299e76b7fad63e791a3bfc770a096d3640b2fb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-821781",
	                        "60dd933522c2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
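In the HostConfig above, every PortBinding publishes to 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports at container start; the resolved mappings show up under NetworkSettings.Ports (22/tcp -> 32768, 8443/tcp -> 32771, and so on). The same Go template that the start log below uses to recover the SSH port can be run by hand, for example:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-821781
	docker port addons-821781 22/tcp    # equivalent shorthand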
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-821781 -n addons-821781
helpers_test.go:244: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-821781 logs -n 25: (1.404121149s)
helpers_test.go:252: TestAddons/parallel/HelmTiller logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-534059              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-920673              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-291625               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-291625            | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-597115                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44611               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-597115              | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | disable dashboard -p                 | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| start   | -p addons-821781 --wait=true         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:26 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| ip      | addons-821781 ip                     | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:11.785613   12642 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:11.786005   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786020   12642 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:11.786026   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786201   12642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:23:11.786846   12642 out.go:352] Setting JSON to false
	I0916 10:23:11.787652   12642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":332,"bootTime":1726481860,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:11.787744   12642 start.go:139] virtualization: kvm guest
	I0916 10:23:11.789971   12642 out.go:177] * [addons-821781] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:11.791581   12642 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:11.791602   12642 notify.go:220] Checking for updates...
	I0916 10:23:11.793279   12642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:11.794876   12642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:11.796234   12642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:23:11.797605   12642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:11.798881   12642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:11.800381   12642 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:11.822354   12642 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:11.822435   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.875294   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.865218731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.875392   12642 docker.go:318] overlay module found
	I0916 10:23:11.877179   12642 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:11.878539   12642 start.go:297] selected driver: docker
	I0916 10:23:11.878555   12642 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:11.878567   12642 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:11.879376   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.928080   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.918595521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.928248   12642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:11.928460   12642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:11.930314   12642 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:11.931824   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:11.931880   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:11.931896   12642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:11.931970   12642 start.go:340] cluster config:
	{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:11.933478   12642 out.go:177] * Starting "addons-821781" primary control-plane node in "addons-821781" cluster
	I0916 10:23:11.934979   12642 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:23:11.936645   12642 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:11.938033   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:11.938077   12642 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:23:11.938086   12642 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:11.938151   12642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:11.938181   12642 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:11.938195   12642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:23:11.938528   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:11.938559   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json: {Name:mkb2d65543ac9e0f1211fb3bb619eaf59705ab34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:11.954455   12642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:11.954550   12642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:11.954565   12642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:11.954570   12642 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:11.954578   12642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:11.954585   12642 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:24.468174   12642 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:24.468219   12642 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:24.468270   12642 start.go:360] acquireMachinesLock for addons-821781: {Name:mk2b69b21902e1a037d888f1a4c14b20c068c000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:24.468392   12642 start.go:364] duration metric: took 101µs to acquireMachinesLock for "addons-821781"
	I0916 10:23:24.468422   12642 start.go:93] Provisioning new machine with config: &{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:24.468511   12642 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:24.470800   12642 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:24.471033   12642 start.go:159] libmachine.API.Create for "addons-821781" (driver="docker")
	I0916 10:23:24.471057   12642 client.go:168] LocalClient.Create starting
	I0916 10:23:24.471161   12642 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:23:24.563569   12642 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:23:24.843226   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:24.859906   12642 cli_runner.go:211] docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:24.859982   12642 network_create.go:284] running [docker network inspect addons-821781] to gather additional debugging logs...
	I0916 10:23:24.860006   12642 cli_runner.go:164] Run: docker network inspect addons-821781
	W0916 10:23:24.875695   12642 cli_runner.go:211] docker network inspect addons-821781 returned with exit code 1
	I0916 10:23:24.875725   12642 network_create.go:287] error running [docker network inspect addons-821781]: docker network inspect addons-821781: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-821781 not found
	I0916 10:23:24.875736   12642 network_create.go:289] output of [docker network inspect addons-821781]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-821781 not found
	
	** /stderr **
	I0916 10:23:24.875825   12642 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:24.892396   12642 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019c5ea0}
	I0916 10:23:24.892450   12642 network_create.go:124] attempt to create docker network addons-821781 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:24.892494   12642 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-821781 addons-821781
	I0916 10:23:24.956362   12642 network_create.go:108] docker network addons-821781 192.168.49.0/24 created
	I0916 10:23:24.956397   12642 kic.go:121] calculated static IP "192.168.49.2" for the "addons-821781" container
	I0916 10:23:24.956461   12642 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:24.972596   12642 cli_runner.go:164] Run: docker volume create addons-821781 --label name.minikube.sigs.k8s.io=addons-821781 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:24.991422   12642 oci.go:103] Successfully created a docker volume addons-821781
	I0916 10:23:24.991492   12642 cli_runner.go:164] Run: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:29.942508   12642 cli_runner.go:217] Completed: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.950978249s)
	I0916 10:23:29.942530   12642 oci.go:107] Successfully prepared a docker volume addons-821781
	I0916 10:23:29.942541   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:29.942558   12642 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:29.942601   12642 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:34.358289   12642 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.415644078s)
	I0916 10:23:34.358318   12642 kic.go:203] duration metric: took 4.415757339s to extract preloaded images to volume ...
	W0916 10:23:34.358449   12642 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:34.358539   12642 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:34.407126   12642 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-821781 --name addons-821781 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-821781 --network addons-821781 --ip 192.168.49.2 --volume addons-821781:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:34.740907   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Running}}
	I0916 10:23:34.761456   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:34.779743   12642 cli_runner.go:164] Run: docker exec addons-821781 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:34.825817   12642 oci.go:144] the created container "addons-821781" has a running status.
	I0916 10:23:34.825843   12642 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa...
	I0916 10:23:35.044132   12642 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:35.071224   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.090107   12642 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:35.090127   12642 kic_runner.go:114] Args: [docker exec --privileged addons-821781 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:35.145473   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.163175   12642 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:35.163257   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.181284   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.181510   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.181525   12642 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:35.376812   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.376844   12642 ubuntu.go:169] provisioning hostname "addons-821781"
	I0916 10:23:35.376907   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.394400   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.394569   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.394582   12642 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-821781 && echo "addons-821781" | sudo tee /etc/hostname
	I0916 10:23:35.535760   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.535841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.554208   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.554394   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.554410   12642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-821781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-821781/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-821781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:35.685491   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:35.685520   12642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:23:35.685538   12642 ubuntu.go:177] setting up certificates
	I0916 10:23:35.685549   12642 provision.go:84] configureAuth start
	I0916 10:23:35.685599   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:35.701932   12642 provision.go:143] copyHostCerts
	I0916 10:23:35.702012   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:23:35.702151   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:23:35.702230   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:23:35.702295   12642 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.addons-821781 san=[127.0.0.1 192.168.49.2 addons-821781 localhost minikube]
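	The server cert logged above carries a SAN list covering every name the machine may be dialed by (loopback, node IP, hostname aliases). A self-signed sketch of the same idea with Go's crypto/x509; minikube's real helper signs with its generated CA rather than self-signing:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a server certificate with IP and DNS SANs,
	// matching the shape of the san=[...] list in the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"demo.addons"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```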
	I0916 10:23:35.783034   12642 provision.go:177] copyRemoteCerts
	I0916 10:23:35.783097   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:35.783127   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.800161   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
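	Every subsequent ssh_runner call goes through a client like the one logged here: the node's SSH port is published on a random 127.0.0.1 port (32768 in this run) and authenticated with the per-machine key. A rough equivalent, assuming the golang.org/x/crypto/ssh module and placeholder key path and port:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("cat /etc/os-release")
	fmt.Print(string(out))
}
```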
	I0916 10:23:35.893913   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:23:35.915296   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:23:35.937405   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:35.959050   12642 provision.go:87] duration metric: took 273.490922ms to configureAuth
	I0916 10:23:35.959082   12642 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:35.959246   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:35.959337   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.977055   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.977247   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.977264   12642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:23:36.194829   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:23:36.194851   12642 machine.go:96] duration metric: took 1.031655385s to provisionDockerMachine
	I0916 10:23:36.194860   12642 client.go:171] duration metric: took 11.723797841s to LocalClient.Create
	I0916 10:23:36.194875   12642 start.go:167] duration metric: took 11.723845183s to libmachine.API.Create "addons-821781"
	I0916 10:23:36.194883   12642 start.go:293] postStartSetup for "addons-821781" (driver="docker")
	I0916 10:23:36.194895   12642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:36.194953   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:36.194987   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.212136   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.306296   12642 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:36.309608   12642 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:36.309638   12642 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:36.309646   12642 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:36.309652   12642 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:36.309662   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:23:36.309721   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:23:36.309744   12642 start.go:296] duration metric: took 114.855265ms for postStartSetup
	I0916 10:23:36.310017   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.326531   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:36.326849   12642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:36.326901   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.343127   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.434151   12642 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:36.438063   12642 start.go:128] duration metric: took 11.969538805s to createHost
	I0916 10:23:36.438087   12642 start.go:83] releasing machines lock for "addons-821781", held for 11.96968194s
	I0916 10:23:36.438170   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.454099   12642 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:36.454144   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.454204   12642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:36.454276   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.472027   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.473599   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.640610   12642 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:36.644626   12642 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:23:36.780722   12642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:36.785109   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.802933   12642 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:36.803016   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.830084   12642 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:36.830106   12642 start.go:495] detecting cgroup driver to use...
	I0916 10:23:36.830135   12642 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:36.830178   12642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:23:36.843678   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:23:36.854207   12642 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:36.854255   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:36.867323   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:36.880430   12642 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:36.955777   12642 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:37.035979   12642 docker.go:233] disabling docker service ...
	I0916 10:23:37.036049   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:37.052780   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:37.063200   12642 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:37.138165   12642 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:37.215004   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:37.225051   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:37.239114   12642 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:23:37.239176   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.248375   12642 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:23:37.248431   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.257180   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.265957   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.274955   12642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:37.283271   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.291833   12642 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.305478   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
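	The block above patches /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). The same whole-line replacement can be sketched in Go; path and key here are placeholders, so point it at a scratch copy rather than a live config:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption mimics the sed pattern s|^.*key = .*$|key = "value"|:
// replace the entire line defining key with the new quoted value.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	fmt.Println(setCrioOption("02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10"))
}
```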
	I0916 10:23:37.314242   12642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:37.321530   12642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:37.328860   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.397743   12642 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:23:37.494696   12642 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:23:37.494784   12642 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:23:37.498069   12642 start.go:563] Will wait 60s for crictl version
	I0916 10:23:37.498121   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:23:37.501763   12642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:37.533845   12642 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:23:37.533971   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.568210   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.602768   12642 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:23:37.604266   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:37.620164   12642 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:37.623594   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
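	The grep -v / echo / cp pipeline above is an idempotent upsert: any stale host.minikube.internal line is filtered out, the fresh mapping is appended, and the temp file replaces /etc/hosts in one cp. A Go sketch of the same effect (run it against a scratch file, not the real /etc/hosts):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// upsertHostsEntry rewrites a hosts file so that exactly one line maps
// name to ip, matching the tab-separated format the pipeline targets.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("hosts.demo", "192.168.49.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```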
	I0916 10:23:37.633351   12642 kubeadm.go:883] updating cluster {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:37.633481   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:37.633537   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.691488   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.691513   12642 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:23:37.691557   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.721834   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.721855   12642 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:37.721863   12642 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:23:37.721943   12642 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-821781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:23:37.722004   12642 ssh_runner.go:195] Run: crio config
	I0916 10:23:37.761799   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:37.761826   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:37.761837   12642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:37.761858   12642 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-821781 NodeName:addons-821781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:37.761998   12642 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-821781"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:23:37.762053   12642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:37.770243   12642 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:37.770305   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:37.778774   12642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:23:37.794482   12642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:37.810783   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
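	The 2151-byte kubeadm.yaml.new written here is the rendered form of the config printed above. A toy text/template sketch showing how such an InitConfiguration could be produced; the template text and field names are invented for this sketch, not minikube's actual template:

```go
package main

import (
	"log"
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	data := struct {
		NodeIP, CRISocket, NodeName string
		APIServerPort               int
	}{"192.168.49.2", "/var/run/crio/crio.sock", "addons-demo", 8443}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}
```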
	I0916 10:23:37.827097   12642 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:37.830351   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:37.840395   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.914798   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:37.926573   12642 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781 for IP: 192.168.49.2
	I0916 10:23:37.926602   12642 certs.go:194] generating shared ca certs ...
	I0916 10:23:37.926624   12642 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:37.926767   12642 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:23:38.165524   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt ...
	I0916 10:23:38.165552   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt: {Name:mk958b9d7b4e596cca12a43812b033701a1808ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165715   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key ...
	I0916 10:23:38.165727   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key: {Name:mk218c15b5e68b365653a5a88f283b4fd2a63397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165796   12642 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:23:38.317748   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt ...
	I0916 10:23:38.317782   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt: {Name:mke289e24f4d60c196cc49c14787f9db71cc62b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.317972   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key ...
	I0916 10:23:38.317984   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key: {Name:mk238a3132478eab5de811cbc3626e41ad1154f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.318059   12642 certs.go:256] generating profile certs ...
	I0916 10:23:38.318110   12642 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key
	I0916 10:23:38.318136   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt with IP's: []
	I0916 10:23:38.579861   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt ...
	I0916 10:23:38.579894   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: {Name:mk21e84efd5822ab69a95d39a845706a794c0061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580087   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key ...
	I0916 10:23:38.580102   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key: {Name:mkafbaeecfaf57db916f1469c60f36a7c0603c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580202   12642 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e
	I0916 10:23:38.580226   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:38.661523   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e ...
	I0916 10:23:38.661551   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e: {Name:mk3603fd200d1d0c9c664f1f9e2d3f37d0da819e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661721   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e ...
	I0916 10:23:38.661734   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e: {Name:mk979e39754dc7623208af4e4f8346a3268b5e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661802   12642 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt
	I0916 10:23:38.661872   12642 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key
	I0916 10:23:38.661916   12642 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key
	I0916 10:23:38.661934   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt with IP's: []
	I0916 10:23:38.868848   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt ...
	I0916 10:23:38.868882   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt: {Name:mk60143e6be001872095f4a07cc8800f3883cb9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869061   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key ...
	I0916 10:23:38.869072   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key: {Name:mkfcb902307b78d6d49e6123539922887bdc7bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869254   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:38.869291   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:23:38.869321   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:38.869365   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:23:38.869947   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:38.891875   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:38.913044   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:38.935301   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:38.957638   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:38.978769   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:38.999283   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:39.020509   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:39.041006   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:39.062022   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:39.077689   12642 ssh_runner.go:195] Run: openssl version
	I0916 10:23:39.082828   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:39.091794   12642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094851   12642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094909   12642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.101357   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:23:39.110237   12642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:39.113275   12642 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
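	The non-zero stat above is interpreted as "first start": the decision reduces to a file-existence probe, which in Go looks like this:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	// Same decision the log makes: a missing client cert means kubeadm
	// has never run here, so treat this as a first start.
	const cert = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if _, err := os.Stat(cert); errors.Is(err, os.ErrNotExist) {
		fmt.Println("cert missing: likely first start")
	} else if err != nil {
		fmt.Println("stat failed:", err)
	} else {
		fmt.Println("cert present: existing cluster state")
	}
}
```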
	I0916 10:23:39.113343   12642 kubeadm.go:392] StartCluster: {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:39.113424   12642 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:39.113461   12642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:39.147213   12642 cri.go:89] found id: ""
	I0916 10:23:39.147277   12642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:39.155102   12642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:39.162655   12642 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:39.162713   12642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:39.170269   12642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:39.170287   12642 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:39.170331   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:39.177944   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:39.178006   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:39.185617   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:39.193448   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:39.193494   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:39.201778   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.209504   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:39.209560   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.217167   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:39.224794   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:39.224851   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:39.232091   12642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:39.267943   12642 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:39.268041   12642 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:39.285854   12642 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:39.285924   12642 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:39.285968   12642 kubeadm.go:310] OS: Linux
	I0916 10:23:39.286011   12642 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:39.286080   12642 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:39.286143   12642 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:39.286205   12642 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:39.286307   12642 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:39.286389   12642 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:39.286430   12642 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:39.286498   12642 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:39.286566   12642 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:39.334020   12642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:39.334137   12642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:39.334277   12642 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:39.339811   12642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:39.342965   12642 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:39.343081   12642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:39.343174   12642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:39.501471   12642 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:39.656891   12642 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:39.803369   12642 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:39.956554   12642 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:40.122217   12642 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:40.122346   12642 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.178788   12642 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:40.178946   12642 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.253274   12642 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:40.444072   12642 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:40.539814   12642 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:40.539908   12642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:40.740107   12642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:40.805609   12642 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:41.114974   12642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:41.183175   12642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:41.287722   12642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:41.288131   12642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:41.290675   12642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:41.293432   12642 out.go:235]   - Booting up control plane ...
	I0916 10:23:41.293554   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:41.293636   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:41.293726   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:41.302536   12642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:41.307914   12642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:41.307975   12642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:41.387469   12642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:41.387659   12642 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:41.889098   12642 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.704632ms
	I0916 10:23:41.889216   12642 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:46.391264   12642 kubeadm.go:310] [api-check] The API server is healthy after 4.502175176s
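	Both waits above (the kubelet healthz check on 127.0.0.1:10248, then the API server check) are poll-until-healthy loops bounded by a deadline. A generic sketch of that pattern; the URL, interval, and timeout are illustrative, not kubeadm's exact internals:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a healthz URL until it returns 200 OK or the
// deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", 10*time.Second))
}
```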
	I0916 10:23:46.402989   12642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:46.412298   12642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:46.429664   12642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:46.429953   12642 kubeadm.go:310] [mark-control-plane] Marking the node addons-821781 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:46.439045   12642 kubeadm.go:310] [bootstrap-token] Using token: 08e8kf.82j5psgo1mt86ygt
	I0916 10:23:46.440988   12642 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:46.441118   12642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:46.443591   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:46.448741   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:46.451033   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:46.453482   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:46.457052   12642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:46.798062   12642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:47.220263   12642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:47.797780   12642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:47.798623   12642 kubeadm.go:310] 
	I0916 10:23:47.798710   12642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:47.798722   12642 kubeadm.go:310] 
	I0916 10:23:47.798838   12642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:47.798858   12642 kubeadm.go:310] 
	I0916 10:23:47.798897   12642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:47.798955   12642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:47.799030   12642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:47.799050   12642 kubeadm.go:310] 
	I0916 10:23:47.799117   12642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:47.799125   12642 kubeadm.go:310] 
	I0916 10:23:47.799191   12642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:47.799202   12642 kubeadm.go:310] 
	I0916 10:23:47.799273   12642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:47.799371   12642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:47.799433   12642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:47.799458   12642 kubeadm.go:310] 
	I0916 10:23:47.799618   12642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:47.799702   12642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:47.799727   12642 kubeadm.go:310] 
	I0916 10:23:47.799855   12642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800005   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:23:47.800028   12642 kubeadm.go:310] 	--control-plane 
	I0916 10:23:47.800034   12642 kubeadm.go:310] 
	I0916 10:23:47.800137   12642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:47.800147   12642 kubeadm.go:310] 
	I0916 10:23:47.800244   12642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800384   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
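	The sha256 value in the join commands above pins the cluster CA; as documented by kubeadm, it is the SHA-256 digest of the CA certificate's Subject Public Key Info. A sketch that recomputes it from a ca.crt (the path is a placeholder):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the raw Subject Public Key Info bytes, as in the
	// --discovery-token-ca-cert-hash format.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```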
	I0916 10:23:47.802505   12642 kubeadm.go:310] W0916 10:23:39.265300    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.802965   12642 kubeadm.go:310] W0916 10:23:39.265967    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.803297   12642 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:47.803488   12642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:47.803508   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:47.803517   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:47.805594   12642 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:47.806930   12642 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:47.811723   12642 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:47.811744   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:47.829314   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:23:48.045373   12642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:48.045433   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.045434   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-821781 minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-821781 minikube.k8s.io/primary=true
	I0916 10:23:48.053143   12642 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:48.121750   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.622580   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.121829   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.622144   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.122640   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.622473   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.122549   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.622693   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.122279   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.622129   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.815735   12642 kubeadm.go:1113] duration metric: took 4.770357411s to wait for elevateKubeSystemPrivileges
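elevateKubeSystemPrivileges, timed above at ~4.77s, is the three-step sequence just logged: read the apiserver's OOM adjustment (-16, i.e. strongly protected from the OOM killer), bind cluster-admin to the kube-system:default service account, and poll `get sa default` every 500ms until the default ServiceAccount exists. Verifying the outcome afterwards, with the binding name taken from the log:

	# The binding minikube created:
	kubectl get clusterrolebinding minikube-rbac -o wide

	# kube-system:default should now be allowed to do anything.
	kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default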
	I0916 10:23:52.815769   12642 kubeadm.go:394] duration metric: took 13.702442151s to StartCluster
	I0916 10:23:52.815790   12642 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.815914   12642 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:52.816324   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.816539   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:52.816545   12642 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:52.816616   12642 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
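The toEnable map above is the complete addon matrix for this profile; every key set to true (ingress, registry, metrics-server, csi-hostpath-driver, gcp-auth, volcano, and so on) is enabled concurrently, which is why the Setting/Checking lines below interleave out of order. The same toggles are available one at a time through the CLI; a sketch, with the profile name from this run:

	# Show the addon states for this profile.
	minikube -p addons-821781 addons list

	# Enable or disable individual addons.
	minikube -p addons-821781 addons enable metrics-server
	minikube -p addons-821781 addons disable volcano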
	I0916 10:23:52.816735   12642 addons.go:69] Setting yakd=true in profile "addons-821781"
	I0916 10:23:52.816749   12642 addons.go:69] Setting ingress-dns=true in profile "addons-821781"
	I0916 10:23:52.816756   12642 addons.go:69] Setting default-storageclass=true in profile "addons-821781"
	I0916 10:23:52.816766   12642 addons.go:69] Setting inspektor-gadget=true in profile "addons-821781"
	I0916 10:23:52.816771   12642 addons.go:234] Setting addon ingress-dns=true in "addons-821781"
	I0916 10:23:52.816777   12642 addons.go:234] Setting addon inspektor-gadget=true in "addons-821781"
	I0916 10:23:52.816781   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.816788   12642 addons.go:69] Setting cloud-spanner=true in profile "addons-821781"
	I0916 10:23:52.816798   12642 addons.go:234] Setting addon cloud-spanner=true in "addons-821781"
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816821   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816815   12642 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-821781"
	I0916 10:23:52.816831   12642 addons.go:69] Setting volumesnapshots=true in profile "addons-821781"
	I0916 10:23:52.816846   12642 addons.go:234] Setting addon volumesnapshots=true in "addons-821781"
	I0916 10:23:52.816852   12642 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-821781"
	I0916 10:23:52.816859   12642 addons.go:69] Setting gcp-auth=true in profile "addons-821781"
	I0916 10:23:52.816864   12642 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-821781"
	I0916 10:23:52.816869   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816875   12642 mustload.go:65] Loading cluster: addons-821781
	I0916 10:23:52.816879   12642 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:52.816885   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816897   12642 addons.go:69] Setting ingress=true in profile "addons-821781"
	I0916 10:23:52.816908   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816914   12642 addons.go:234] Setting addon ingress=true in "addons-821781"
	I0916 10:23:52.816821   12642 addons.go:69] Setting storage-provisioner=true in profile "addons-821781"
	I0916 10:23:52.816951   12642 addons.go:234] Setting addon storage-provisioner=true in "addons-821781"
	I0916 10:23:52.816952   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816967   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816991   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.817237   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817375   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816847   12642 addons.go:69] Setting helm-tiller=true in profile "addons-821781"
	I0916 10:23:52.817387   12642 addons.go:69] Setting registry=true in profile "addons-821781"
	I0916 10:23:52.817393   12642 addons.go:234] Setting addon helm-tiller=true in "addons-821781"
	I0916 10:23:52.817398   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817399   12642 addons.go:234] Setting addon registry=true in "addons-821781"
	I0916 10:23:52.817413   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817421   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817453   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817460   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817835   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817839   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.818548   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816758   12642 addons.go:234] Setting addon yakd=true in "addons-821781"
	I0916 10:23:52.818812   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816831   12642 addons.go:69] Setting metrics-server=true in profile "addons-821781"
	I0916 10:23:52.819624   12642 addons.go:234] Setting addon metrics-server=true in "addons-821781"
	I0916 10:23:52.819661   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816777   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-821781"
	I0916 10:23:52.820048   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820121   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820925   12642 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:52.817377   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.823819   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:52.819369   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817378   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816830   12642 addons.go:69] Setting volcano=true in profile "addons-821781"
	I0916 10:23:52.827260   12642 addons.go:234] Setting addon volcano=true in "addons-821781"
	I0916 10:23:52.827341   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.827903   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816822   12642 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-821781"
	I0916 10:23:52.828667   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-821781"
	I0916 10:23:52.846468   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.849708   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.849779   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.858180   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:52.860117   12642 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:52.861491   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:52.861515   12642 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:52.861580   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.861792   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:52.863536   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:52.865265   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:52.868592   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:52.871812   12642 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:52.873467   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:52.873491   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:52.873553   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.873826   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:52.875500   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:52.876891   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:52.878274   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:52.878295   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:52.878358   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.885380   12642 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:52.887180   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:52.887200   12642 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:52.887253   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.887590   12642 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:52.889278   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:52.889293   12642 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:52.891126   12642 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:52.891146   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:52.891207   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.891375   12642 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:52.893052   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.893213   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:52.893225   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:52.893284   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.895906   12642 addons.go:234] Setting addon default-storageclass=true in "addons-821781"
	I0916 10:23:52.895950   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.896395   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.902602   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.904755   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:52.904779   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:52.904841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.913208   12642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:52.916490   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:52.916516   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:52.916578   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.920102   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.921373   12642 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:52.924287   12642 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:52.924310   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:52.924367   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.924567   12642 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:52.924966   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.927248   12642 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:52.927271   12642 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:52.927324   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	W0916 10:23:52.939182   12642 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
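The volcano failure is deterministic rather than a flake: per the callback error, the addon does not support the crio runtime, so the warning fires before anything is applied. Confirming which runtime a node is on takes one standard kubectl call (the CONTAINER-RUNTIME column reads cri-o://... for this profile):

	kubectl get nodes -o wide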
	I0916 10:23:52.945562   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.947311   12642 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:52.949640   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:52.949813   12642 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:52.949828   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:52.949883   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.950915   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:52.950951   12642 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:52.951010   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.967061   12642 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-821781"
	I0916 10:23:52.967112   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.967600   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.976558   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.977128   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979407   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979587   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979666   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982295   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982301   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984209   12642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:52.984228   12642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:52.984267   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984282   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.985867   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.992433   12642 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:52.996036   12642 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:52.998876   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:52.998899   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:52.998966   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:53.007398   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.031542   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
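Every `new ssh client` line above reuses the same endpoint: the `docker container inspect -f` runs resolve which host port Docker mapped to the node container's 22/tcp (32768 here), and minikube then connects as the docker user with the per-machine key. Reproducing that by hand, with the port and key path exactly as logged:

	# Which host port fronts the container's SSH port?
	docker port addons-821781 22/tcp

	# Connect the way sshutil does.
	ssh -p 32768 \
	  -i /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa \
	  docker@127.0.0.1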
	I0916 10:23:53.198285   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:53.222232   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:53.223607   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:53.303303   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:53.303391   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:53.412003   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:53.494460   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:53.495317   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:53.495388   12642 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:53.500279   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:53.500366   12642 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:53.518431   12642 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:53.518460   12642 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:53.595357   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:53.595389   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:53.595502   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:53.595520   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:53.601235   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:53.601265   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:53.603514   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:53.610819   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:53.613851   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:53.696891   12642 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:53.696920   12642 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:53.697186   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:53.711949   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:53.711981   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:53.793955   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:53.794047   12642 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:53.795627   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:53.795652   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:53.810579   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:53.810623   12642 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:53.818121   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:53.818143   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:54.008884   12642 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:54.008915   12642 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:54.097416   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:54.097502   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:54.105048   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:54.114541   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:54.116113   12642 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.116175   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:54.194093   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:54.194181   12642 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:54.310015   12642 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:54.310107   12642 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:54.315950   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:54.316029   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:54.409828   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.595664   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:54.595750   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:54.795049   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:54.795131   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:54.795986   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:54.796042   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:54.798857   12642 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.60047423s)
	I0916 10:23:54.798970   12642 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
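The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive so in-cluster DNS resolves host.minikube.internal to the gateway address, and adds a log directive after errors. Inspecting the result (the stanza in the comment is what the sed expressions in the command construct):

	# The Corefile should now contain:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl -n kube-system get configmap coredns -o yaml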
	I0916 10:23:54.798946   12642 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.576635993s)
	I0916 10:23:54.799977   12642 node_ready.go:35] waiting up to 6m0s for node "addons-821781" to be "Ready" ...
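node_ready.go now polls the node object until its Ready condition turns True; the periodic `"Ready":"False"` lines below are those polls. The wait is roughly one declarative kubectl call, with the node name and 6-minute budget from the log:

	kubectl wait --for=condition=Ready node/addons-821781 --timeout=6m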
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:54.816489   12642 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:54.816544   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:55.096307   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:55.096398   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:55.098163   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:55.303720   12642 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:55.303802   12642 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:55.310866   12642 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:55.310939   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:55.509740   12642 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-821781" context rescaled to 1 replicas
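kapi.go trims coredns from kubeadm's default of two replicas to one, which is plenty for a single-node profile. The imperative equivalent:

	kubectl -n kube-system scale deployment coredns --replicas=1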
	I0916 10:23:55.603909   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:55.603992   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:55.609116   12642 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:55.609197   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:55.701381   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:56.095470   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:56.095499   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:56.106357   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:56.115945   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.892303376s)
	I0916 10:23:56.209795   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:56.209873   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:56.410426   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:56.410515   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:56.511332   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.511408   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:56.813818   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.895029   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:58.497986   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.085861545s)
	I0916 10:23:58.498185   12642 addons.go:475] Verifying addon ingress=true in "addons-821781"
	I0916 10:23:58.498214   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.894594589s)
	I0916 10:23:58.498365   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.801136889s)
	I0916 10:23:58.498429   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.393306067s)
	I0916 10:23:58.498499   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.383877389s)
	I0916 10:23:58.498516   12642 addons.go:475] Verifying addon metrics-server=true in "addons-821781"
	I0916 10:23:58.498551   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.08869279s)
	I0916 10:23:58.498561   12642 addons.go:475] Verifying addon registry=true in "addons-821781"
	I0916 10:23:58.498687   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.40044143s)
	I0916 10:23:58.498148   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.003579441s)
	I0916 10:23:58.498265   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.887343223s)
	I0916 10:23:58.498721   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.884394452s)
	I0916 10:23:58.500166   12642 out.go:177] * Verifying registry addon...
	I0916 10:23:58.500186   12642 out.go:177] * Verifying ingress addon...
	I0916 10:23:58.500168   12642 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-821781 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:58.502840   12642 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:23:58.502984   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0916 10:23:58.505976   12642 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
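The storage-provisioner-rancher warning is a plain optimistic-concurrency conflict: two writers raced to update the local-path StorageClass, and the loser submitted a stale resourceVersion. The fix is to re-issue the change against the current object; a sketch using the standard default-class annotation (kubectl patch is applied server-side to the latest version, so it avoids the stale-read problem):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'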
	I0916 10:23:58.508066   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:58.508081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:58.508299   12642 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:23:58.508315   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.012329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.110843   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.299182   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.597694462s)
	W0916 10:23:59.299228   12642 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:59.299250   12642 retry.go:31] will retry after 144.288551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:59.299277   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.19282086s)
	I0916 10:23:59.305158   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:59.444238   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
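The failed apply above is the usual CRD-ordering race: a single kubectl apply both creates the VolumeSnapshot CRDs and instantiates a VolumeSnapshotClass, but the REST mapping for the new kind only exists once the CRDs are established, so the custom resource is rejected with "ensure CRDs are installed first". minikube's answer is the 144ms retry with --force just issued; an alternative pattern is to gate the re-apply on the CRD explicitly:

	# Wait for the CRD to be established, then re-apply the custom resource.
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml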
	I0916 10:23:59.506924   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.507806   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.539307   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.725399907s)
	I0916 10:23:59.539335   12642 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:59.541718   12642 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:59.543660   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:59.597366   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:59.597452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.006951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.007539   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.096393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.099134   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:00.099205   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.125424   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:24:00.418412   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:00.508361   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.509838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.518754   12642 addons.go:234] Setting addon gcp-auth=true in "addons-821781"
	I0916 10:24:00.518809   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:24:00.519365   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:24:00.536851   12642 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:00.536902   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.553493   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:24:00.596428   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.006170   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.006803   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.047121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.506287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.506534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.547185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.805560   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:02.007448   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.008038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.046600   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.202834   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.758545356s)
	I0916 10:24:02.202854   12642 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.665973141s)
	I0916 10:24:02.205053   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:02.206664   12642 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:02.208283   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:02.208296   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:02.226305   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:02.226333   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:02.244167   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.244187   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:02.298853   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.506489   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.506968   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.547297   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.899621   12642 addons.go:475] Verifying addon gcp-auth=true in "addons-821781"
	I0916 10:24:02.901591   12642 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:02.904224   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:02.907029   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:02.907051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.007207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.007880   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.047134   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.407111   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.506509   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.507075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.547522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.907027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.007265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.007643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.046594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.303245   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:04.407879   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.506365   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.506939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.547412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.907817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.006397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.007232   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.047038   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.407918   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.506892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.507154   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.547266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.907671   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.006358   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.006625   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.046717   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.407766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.506364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.506750   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.547000   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.803631   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:06.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.006037   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.006551   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.046971   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.407314   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.506338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.547256   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.907021   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.005785   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.006334   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.046439   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.408357   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.505952   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.506643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.547247   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.803661   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:08.907343   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.006189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.006703   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.046966   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.407657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.506182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.506608   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.546942   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.907283   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.005977   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.006337   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.046685   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.408104   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.507241   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.547393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.907115   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.005778   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.006115   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.047296   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.302797   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:11.407398   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.506075   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.506794   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.546885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.907330   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.006053   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.006567   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.046997   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.407912   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.506528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.507006   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.547228   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.907413   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.006062   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.006437   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.046726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.303472   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:13.407845   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.506423   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.506765   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.547162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.907106   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.005737   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.006410   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.047326   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.407189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.505915   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.506316   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.547399   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.907535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.006393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.007080   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.046972   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.407693   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.506219   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.506709   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.547052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.803455   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:15.907823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.006647   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.007106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.047456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.407960   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.506331   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.506765   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.547157   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.907551   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.006299   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.006617   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.047040   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.406899   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.506449   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.506938   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.547210   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.907861   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.006488   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.006990   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.046795   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.303390   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:18.408194   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.505660   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.506075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.547467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.908947   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.006658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.007120   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.047574   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.407694   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.506237   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.506764   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.546743   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.907775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.006250   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.006926   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.046950   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.407914   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.506444   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.506893   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.547165   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.802891   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:20.908266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.006168   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.006661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.046763   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.407620   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.506280   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.506758   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.547207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.907808   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.006390   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.006832   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.047258   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.407294   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.506192   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.506573   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.546892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.803612   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:22.907631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.006412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.006789   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.047499   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.407703   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.506242   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.506922   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.546531   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.907989   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.006557   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.007064   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.047256   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.407245   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.506027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.506326   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.546265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.907143   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.006149   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.006574   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.046726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.303085   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:25.407800   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.506502   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.506958   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.549041   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.907130   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.005689   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.006094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.047573   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.407949   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.506465   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.506873   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.547130   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.907930   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.006498   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.006899   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.047132   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.303541   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:27.407076   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.505560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.506083   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.547418   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.907322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.006007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.006289   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.046769   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.506106   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.506493   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.547121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.907052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.005692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.006125   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.047636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.407566   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.506440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.506780   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.547158   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.802646   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:29.907185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.005875   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.006320   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.046391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.407344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.505998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.506431   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.546833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.006755   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.007344   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.047565   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.407650   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.506485   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.506906   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.547281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.803334   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:31.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.006411   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.006716   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.047171   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.407108   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.505792   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.506357   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.547493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.907787   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.006393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.007161   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.047511   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.407346   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.506125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.506509   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.547645   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.803187   12642 node_ready.go:49] node "addons-821781" has status "Ready":"True"
	I0916 10:24:33.803213   12642 node_ready.go:38] duration metric: took 39.003174602s for node "addons-821781" to be "Ready" ...
	I0916 10:24:33.803225   12642 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
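
Once the node flips to Ready, the log switches to pod_ready.go, which walks the system-critical label selectors listed above and gives each up to 6m0s. Continuing the earlier sketch, that outer loop could look like the following — the selector list is copied verbatim from the pod_ready.go:36 line; everything else (shared vs. per-pod timeout, helper names) is an assumption:

    // Selectors copied from the pod_ready.go:36 log line above.
    var systemCritical = []string{
        "k8s-app=kube-dns",
        "component=etcd",
        "component=kube-apiserver",
        "component=kube-controller-manager",
        "k8s-app=kube-proxy",
        "component=kube-scheduler",
    }

    // waitSystemCritical waits for every matching kube-system pod, reusing
    // waitForLabel from the sketch above. Illustrative only.
    func waitSystemCritical(cs kubernetes.Interface) error {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for _, sel := range systemCritical {
            if err := waitForLabel(ctx, cs, "kube-system", sel); err != nil {
                return fmt.Errorf("pods for %q never became Ready: %w", sel, err)
            }
        }
        return nil
    }
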
	I0916 10:24:33.970599   12642 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:34.069001   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.088106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.088355   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:34.088380   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.088736   12642 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:34.088757   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.407852   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.508926   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.509671   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.609806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.907890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.006456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.006807   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.047745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.407857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.476382   12642 pod_ready.go:93] pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.476406   12642 pod_ready.go:82] duration metric: took 1.50577246s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.476429   12642 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480336   12642 pod_ready.go:93] pod "etcd-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.480359   12642 pod_ready.go:82] duration metric: took 3.921757ms for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480374   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484379   12642 pod_ready.go:93] pod "kube-apiserver-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.484399   12642 pod_ready.go:82] duration metric: took 4.01835ms for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484407   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488483   12642 pod_ready.go:93] pod "kube-controller-manager-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.488502   12642 pod_ready.go:82] duration metric: took 4.089026ms for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488513   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492259   12642 pod_ready.go:93] pod "kube-proxy-7grrw" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.492277   12642 pod_ready.go:82] duration metric: took 3.758267ms for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492286   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.508978   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.509276   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.548257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.875363   12642 pod_ready.go:93] pod "kube-scheduler-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.875387   12642 pod_ready.go:82] duration metric: took 383.093988ms for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.875399   12642 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
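
Each pod_ready.go:93/82 pair above records a duration metric: the time from the start of the wait to the first observation of "Ready":"True" (1.5s for coredns, a few milliseconds for the static control-plane pods that were already up). The loop then moves on to metrics-server, which reports "Ready":"False" for a while below, since it only passes its readiness probe once it can scrape the kubelet's metrics endpoint. A per-pod wait with that timing could be sketched as follows, again reusing the imports above (cadence and names are assumptions; the log shows checks roughly every 2s):

    // waitForPodReady polls one named pod until its PodReady condition is True,
    // printing a duration metric like pod_ready.go:82 does. Illustrative only.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        start := time.Now()
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Printf("duration metric: took %s for pod %q to be Ready\n",
                            time.Since(start), name)
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second): // cadence assumed from the log spacing
            }
        }
    }
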
	I0916 10:24:35.907718   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.006857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.007094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.047708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.407759   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.506231   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.506532   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.547623   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.908178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.009196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.009613   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.111822   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.408212   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.507815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.508955   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.597930   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.899332   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.907966   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.007593   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.007941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.096688   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.407803   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.507008   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.507185   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.548820   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.912820   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.007788   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.007812   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.048263   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.407800   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.506945   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.507715   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.548866   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.908787   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.007032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.007632   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.048796   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.398719   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:40.407487   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.507397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.507772   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.548227   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.908344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.009557   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.009817   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.048882   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.407443   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.507386   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.507614   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.547783   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.907344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.006438   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.006755   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.047817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.407604   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.506506   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.506862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.548258   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.880576   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:42.907125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.006570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.006955   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.048271   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.407864   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.507257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.507492   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.548688   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.907268   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.006139   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.006358   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.048808   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.408058   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.506983   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.507322   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.548244   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.907777   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.007224   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.007575   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.048360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.381456   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:45.408061   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.507492   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.507642   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.548176   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.907279   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.006236   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.006567   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.047499   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.407829   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.507175   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.507613   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.549215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.908356   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.007293   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.007559   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.098016   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.398953   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:47.408142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.507848   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.508575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.597783   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.907504   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.006545   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.007094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.047872   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.408467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.506796   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.507040   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.548302   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.907911   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.007377   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.007799   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.048150   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.407649   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.506584   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.507145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.548392   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.881772   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:49.907684   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.006877   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.007616   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.048576   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.408384   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.509092   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.509234   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.548191   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.907565   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.008280   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.008548   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.048447   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.407510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.506404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.547570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.900427   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:51.908013   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.008311   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.009178   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.098159   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.407616   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.506895   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.507402   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.548326   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.907362   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.008415   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.009033   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.110477   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.408669   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.508937   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.509320   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.548259   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.907440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.006459   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.006703   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.047766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.381253   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:54.408025   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.506984   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.548500   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.907545   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.007055   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.007267   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.048307   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.407381   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.506329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.506924   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.547861   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.907031   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.007920   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.048290   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.407755   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.508288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.508534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.547447   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.880835   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:56.907604   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.008980   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.009246   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.048404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.408337   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.506591   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.506714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.547844   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.907931   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.007018   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.007364   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.048745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.407890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.506768   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.507350   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.548030   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.883327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:58.908144   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.008937   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.010047   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.048751   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.407088   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.507067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.507939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.597408   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.907493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.006520   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.006934   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.407658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.507503   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.548304   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.908137   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.007637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.007838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.048049   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.381960   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:01.407780   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.506951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.507128   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.549865   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.908484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.009640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.009714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.047344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.407125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.506639   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.547791   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.908024   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.007189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.007861   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.048215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.408697   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.509655   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.509879   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.547998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.881604   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:03.907142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.006400   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.006547   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.047579   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.407594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.509746   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.510002   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.547819   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.907345   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.006657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.006921   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.048328   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.407535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.506637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.506876   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.548360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.881794   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:05.907547   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.006578   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.007101   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.047920   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.408051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.506012   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.506238   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.548610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.006786   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.007057   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.048484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.407806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.506692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.506986   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.548007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.907772   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.006701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.006970   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.047834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.394559   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:08.408017   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.507156   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.507728   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.597758   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.907919   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.007661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.098454   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.408318   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.509364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.510773   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.598483   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.908201   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.008441   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.009850   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.102292   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.398327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:10.408466   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.507500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.507925   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.548323   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.907708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.006815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.008091   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.047722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.407736   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.507196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.507427   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.599680   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.907752   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.007430   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.007699   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.047776   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.407516   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.506452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.506628   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.550195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.880927   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:12.907727   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.007178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.007457   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.407946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.507322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.507501   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.547784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.908011   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.007871   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.008085   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.049162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.407342   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.506366   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.507489   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.597388   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.881914   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:14.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.007276   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.008484   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.097577   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.407927   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.507867   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.508145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.548701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.909823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.012269   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.012490   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.112080   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.407823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.506640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.507038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.547677   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.908338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.006229   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.006500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.047433   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.380841   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:17.408141   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.507281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.507422   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.548306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.908216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.005946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.006253   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.048471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.407630   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.506857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.507586   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.547722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.908142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.007287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.007657   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.048873   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.399218   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:19.408522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.506838   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.506974   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.548754   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.907508   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.006666   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.007738   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.096885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.407683   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.507079   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.507594   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.549277   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.938821   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.007125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.007361   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.049052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.408461   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.506721   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.507045   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.548148   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.881149   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:21.907701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.007091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.007530   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.108828   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.408067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.507251   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.507505   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.549744   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.908512   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.006557   12642 kapi.go:107] duration metric: took 1m24.503572468s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:23.007211   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.050575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.408216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.507222   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.548029   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.881704   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:23.907636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.006951   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.048091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.407560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.506856   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.548705   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.907750   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.006941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.048097   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.408473   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.507086   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.548651   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.907834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.007469   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.415775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.417875   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:26.507746   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.549493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.908404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.009635   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.048391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.408105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.509068   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.548222   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.908042   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.007883   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.047932   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.408370   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.507379   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.548467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.898654   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:28.907039   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.007310   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.048105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.407790   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.507440   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.598195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.907810   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.007961   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.407748   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.548456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.908206   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.007623   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.048306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.380691   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:31.407719   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.506896   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.547878   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.907840   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.007212   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.048133   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.407238   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.506798   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.548528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.907455   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.006747   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.047570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.381514   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:33.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.506478   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.548374   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.907944   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.007347   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.048784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.408200   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.506244   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.548189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.907539   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.006862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.049282   12642 kapi.go:107] duration metric: took 1m35.505619997s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:25:35.407599   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.881121   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:35.907998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.007303   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.407476   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.506940   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.006647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.408081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.507464   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.908184   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.007201   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.381474   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:38.407986   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.508647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.908946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.008435   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.408471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.510473   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.995610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.008869   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.397632   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:40.408032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.509659   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.907933   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.007031   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.408056   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.508041   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.908287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.006885   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.407440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.880849   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:42.907379   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.008348   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.408661   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.907189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.006692   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.407965   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.507074   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.908416   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.006411   12642 kapi.go:107] duration metric: took 1m46.503572843s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:45.381179   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:45.459019   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.907457   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.408510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.907182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.396594   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:47.407631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.908030   12642 kapi.go:107] duration metric: took 1m45.003803312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:47.909696   12642 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-821781 cluster.
	I0916 10:25:47.911374   12642 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:47.913470   12642 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:47.915138   12642 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, helm-tiller, metrics-server, storage-provisioner, cloud-spanner, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:47.916678   12642 addons.go:510] duration metric: took 1m55.100061322s for enable addons: enabled=[ingress-dns nvidia-device-plugin helm-tiller metrics-server storage-provisioner cloud-spanner yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
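The `gcp-auth-skip-secret` opt-out mentioned above has to be on the pod at creation time; as the hint notes, pods that already exist must be recreated or refreshed. A minimal sketch against this run's cluster, where the pod name demo and the nginx image are placeholders rather than anything from the log:

    kubectl --context addons-821781 run demo --image=nginx --labels="gcp-auth-skip-secret=true"

Per the hint above, re-mounting credentials into already-running pods would instead be done by rerunning the addon with the refresh flag, e.g. minikube -p addons-821781 addons enable gcp-auth --refresh.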
	I0916 10:25:49.881225   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:52.381442   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:54.380287   12642 pod_ready.go:93] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.380308   12642 pod_ready.go:82] duration metric: took 1m18.504902601s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.380318   12642 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384430   12642 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.384450   12642 pod_ready.go:82] duration metric: took 4.126025ms for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384468   12642 pod_ready.go:39] duration metric: took 1m20.581229133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
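The pod_ready polling above watches each pod's "Ready" condition. Roughly the same check by hand, using the metrics-server pod from this run; the jsonpath expression is illustrative and not taken from minikube:

    kubectl --context addons-821781 -n kube-system get pod metrics-server-84c5f94fbc-t6sfx \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints True once the pod reports Ready, matching the "Ready":"True" transition logged at 10:25:54.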
	I0916 10:25:54.384485   12642 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:25:54.384513   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:54.384564   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:54.417384   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.417411   12642 cri.go:89] found id: ""
	I0916 10:25:54.417421   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:54.417476   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.420785   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:54.420839   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:54.452868   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.452890   12642 cri.go:89] found id: ""
	I0916 10:25:54.452898   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:54.452950   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.456066   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:54.456119   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:54.487907   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:54.487930   12642 cri.go:89] found id: ""
	I0916 10:25:54.487938   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:54.487992   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.491215   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:54.491266   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:54.523745   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.523766   12642 cri.go:89] found id: ""
	I0916 10:25:54.523775   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:54.523831   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.527161   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:54.527229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:54.560095   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.560123   12642 cri.go:89] found id: ""
	I0916 10:25:54.560133   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:54.560180   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.563529   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:54.563589   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:54.596576   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:54.596600   12642 cri.go:89] found id: ""
	I0916 10:25:54.596608   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:54.596655   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.599825   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:54.599906   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:54.632507   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:54.632531   12642 cri.go:89] found id: ""
	I0916 10:25:54.632539   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:54.632620   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.635882   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:54.635906   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:54.698451   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:54.698492   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:54.799766   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:54.799797   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.843933   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:54.843963   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.894142   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:54.894174   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.934257   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:54.934288   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.967135   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:54.967163   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:55.001104   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:55.001133   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:55.013631   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:55.013663   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:55.047469   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:55.047499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:55.106750   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:55.106787   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:55.182277   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:55.182324   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
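
The cycle above is minikube's standard diagnostics sweep: it enumerates one container per control-plane component with `crictl ps -a --quiet --name=<component>`, then tails each container's log plus the kubelet and CRI-O journals. The same sweep can be reproduced by hand; a minimal sketch, assuming `minikube ssh` access to this node (the container ID prefix is taken from the log above and is illustrative):

    # one container ID per component
    minikube ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    # tail that container's log, as logs.go does with --tail 400
    minikube ssh -- sudo crictl logs --tail 400 f7c9dd60c650e
    # unit logs for the kubelet and the CRI-O runtime
    minikube ssh -- sudo journalctl -u kubelet -n 400
    minikube ssh -- sudo journalctl -u crio -n 400
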
	I0916 10:25:57.726595   12642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:25:57.740119   12642 api_server.go:72] duration metric: took 2m4.923540882s to wait for apiserver process to appear ...
	I0916 10:25:57.740152   12642 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:25:57.740187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:57.740229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:57.772533   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:57.772558   12642 cri.go:89] found id: ""
	I0916 10:25:57.772566   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:57.772615   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.775778   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:57.775838   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:57.813245   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:57.813271   12642 cri.go:89] found id: ""
	I0916 10:25:57.813281   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:57.813354   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.817691   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:57.817769   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:57.851306   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:57.851328   12642 cri.go:89] found id: ""
	I0916 10:25:57.851335   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:57.851378   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.854640   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:57.854706   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:57.904175   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:57.904198   12642 cri.go:89] found id: ""
	I0916 10:25:57.904205   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:57.904252   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.907938   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:57.907996   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:57.941402   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:57.941421   12642 cri.go:89] found id: ""
	I0916 10:25:57.941428   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:57.941481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.944741   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:57.944796   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:57.979020   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:57.979042   12642 cri.go:89] found id: ""
	I0916 10:25:57.979051   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:57.979108   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.982381   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:57.982431   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:58.014858   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:58.014881   12642 cri.go:89] found id: ""
	I0916 10:25:58.014890   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:58.014937   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:58.018251   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:58.018272   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:58.050812   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:58.050847   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:58.108286   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:58.108318   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:58.182964   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:58.183002   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:58.248089   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:58.248126   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:58.260293   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:58.260339   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:58.355509   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:58.355535   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:58.398314   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:58.398350   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:58.445703   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:58.445736   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:25:58.485997   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:58.486025   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:58.519971   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:58.519998   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:58.558470   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:58.558499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.092930   12642 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:26:01.096706   12642 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:26:01.097615   12642 api_server.go:141] control plane version: v1.31.1
	I0916 10:26:01.097635   12642 api_server.go:131] duration metric: took 3.357476241s to wait for apiserver health ...
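
The healthz wait is a plain HTTPS GET against the apiserver endpoint; a 200 response with body `ok`, as above, counts as healthy. Default RBAC exposes `/healthz` to unauthenticated clients, so the probe can be reproduced without client certificates; a minimal sketch (note that in this run the host's kubectl binary is itself broken with an exec format error, so only the curl form would actually work here):

    curl -k https://192.168.49.2:8443/healthz           # prints: ok
    kubectl --context addons-821781 get --raw /healthz  # equivalent, via kubeconfig
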
	I0916 10:26:01.097642   12642 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:26:01.097662   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:26:01.097709   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:26:01.131450   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.131477   12642 cri.go:89] found id: ""
	I0916 10:26:01.131489   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:26:01.131542   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.134752   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:26:01.134813   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:26:01.166978   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.167002   12642 cri.go:89] found id: ""
	I0916 10:26:01.167014   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:26:01.167057   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.170770   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:26:01.170821   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:26:01.203544   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.203564   12642 cri.go:89] found id: ""
	I0916 10:26:01.203571   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:26:01.203632   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.207027   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:26:01.207101   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:26:01.240766   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.240787   12642 cri.go:89] found id: ""
	I0916 10:26:01.240795   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:26:01.240847   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.244187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:26:01.244242   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:26:01.278657   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.278686   12642 cri.go:89] found id: ""
	I0916 10:26:01.278696   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:26:01.278754   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.282264   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:26:01.282333   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:26:01.316408   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.316431   12642 cri.go:89] found id: ""
	I0916 10:26:01.316439   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:26:01.316481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.319848   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:26:01.319913   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:26:01.352617   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.352637   12642 cri.go:89] found id: ""
	I0916 10:26:01.352645   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:26:01.352692   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.356052   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:26:01.356078   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:26:01.430171   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:26:01.430203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:26:01.471970   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:26:01.472001   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.512405   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:26:01.512437   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.545482   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:26:01.545511   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:26:01.657458   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:26:01.657495   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.703167   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:26:01.703203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.753488   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:26:01.753528   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.788778   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:26:01.788809   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.847216   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:26:01.847252   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.883444   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:26:01.883479   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:26:01.950602   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:26:01.950637   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:26:04.473621   12642 system_pods.go:59] 19 kube-system pods found
	I0916 10:26:04.473667   12642 system_pods.go:61] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.473674   12642 system_pods.go:61] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.473678   12642 system_pods.go:61] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.473681   12642 system_pods.go:61] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.473685   12642 system_pods.go:61] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.473688   12642 system_pods.go:61] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.473692   12642 system_pods.go:61] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.473696   12642 system_pods.go:61] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.473699   12642 system_pods.go:61] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.473702   12642 system_pods.go:61] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.473706   12642 system_pods.go:61] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.473709   12642 system_pods.go:61] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.473712   12642 system_pods.go:61] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.473715   12642 system_pods.go:61] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.473718   12642 system_pods.go:61] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.473722   12642 system_pods.go:61] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.473725   12642 system_pods.go:61] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.473728   12642 system_pods.go:61] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.473731   12642 system_pods.go:61] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.473737   12642 system_pods.go:74] duration metric: took 3.376089349s to wait for pod list to return data ...
	I0916 10:26:04.473747   12642 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:26:04.476243   12642 default_sa.go:45] found service account: "default"
	I0916 10:26:04.476265   12642 default_sa.go:55] duration metric: took 2.512507ms for default service account to be created ...
	I0916 10:26:04.476273   12642 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:26:04.484719   12642 system_pods.go:86] 19 kube-system pods found
	I0916 10:26:04.484756   12642 system_pods.go:89] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.484762   12642 system_pods.go:89] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.484766   12642 system_pods.go:89] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.484770   12642 system_pods.go:89] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.484774   12642 system_pods.go:89] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.484778   12642 system_pods.go:89] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.484782   12642 system_pods.go:89] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.484786   12642 system_pods.go:89] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.484790   12642 system_pods.go:89] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.484796   12642 system_pods.go:89] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.484800   12642 system_pods.go:89] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.484803   12642 system_pods.go:89] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.484807   12642 system_pods.go:89] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.484812   12642 system_pods.go:89] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.484818   12642 system_pods.go:89] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.484822   12642 system_pods.go:89] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.484826   12642 system_pods.go:89] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.484830   12642 system_pods.go:89] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.484834   12642 system_pods.go:89] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.484840   12642 system_pods.go:126] duration metric: took 8.563189ms to wait for k8s-apps to be running ...
	I0916 10:26:04.484851   12642 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:26:04.484897   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:26:04.496212   12642 system_svc.go:56] duration metric: took 11.351945ms WaitForService to wait for kubelet
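
The kubelet service check relies only on systemctl's exit status (`is-active --quiet` prints nothing and exits 0 when the unit is active). An equivalent one-liner, assuming SSH access to the node:

    minikube ssh -- sudo systemctl is-active --quiet kubelet && echo kubelet is running
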
	I0916 10:26:04.496239   12642 kubeadm.go:582] duration metric: took 2m11.67966753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:26:04.496261   12642 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:26:04.499350   12642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:26:04.499377   12642 node_conditions.go:123] node cpu capacity is 8
	I0916 10:26:04.499389   12642 node_conditions.go:105] duration metric: took 3.122952ms to run NodePressure ...
	I0916 10:26:04.499400   12642 start.go:241] waiting for startup goroutines ...
	I0916 10:26:04.499406   12642 start.go:246] waiting for cluster config update ...
	I0916 10:26:04.499455   12642 start.go:255] writing updated cluster config ...
	I0916 10:26:04.519561   12642 ssh_runner.go:195] Run: rm -f paused
	I0916 10:26:04.665202   12642 out.go:177] * Done! kubectl is now configured to use "addons-821781" cluster and "default" namespace by default
	E0916 10:26:04.666644   12642 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
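
That final `exec format error` is the kernel's ENOEXEC: /usr/local/bin/kubectl is not a binary this machine can execute, which almost always means a wrong-architecture (or truncated) download, and it is the same error recorded for the kubectl-driven test failures in this report. A quick check, assuming standard host tooling (expected values shown for this amd64 node):

    file /usr/local/bin/kubectl   # expect: ELF 64-bit LSB executable, x86-64
    uname -m                      # expect: x86_64
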
	
	
	==> CRI-O <==
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.766446169Z" level=info msg="Checking image status: ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971" id=c5de6159-8790-4c29-8058-062f3cd01e72 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.767540236Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,RepoTags:[],RepoDigests:[ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971 ghcr.io/headlamp-k8s/headlamp@sha256:c8e183672fcb6f4816fdd2e13c520f7a1946297aa70dd1c46f83bf859c8dd5ec],Size_:187495815,Uid:nil,Username:headlamp,Spec:nil,},Info:map[string]string{},}" id=c5de6159-8790-4c29-8058-062f3cd01e72 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.768366013Z" level=info msg="Creating container: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=6333ebfa-7f47-4891-81bf-b5e60ab69798 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.768477983Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.819023840Z" level=info msg="Created container 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=6333ebfa-7f47-4891-81bf-b5e60ab69798 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.819618141Z" level=info msg="Starting container: 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557" id=9ece8bd9-e051-4e9c-a08a-174a05cbaebe name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:26:57 addons-821781 crio[1028]: time="2024-09-16 10:26:57.825780436Z" level=info msg="Started container" PID=8858 containerID=34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557 description=headlamp/headlamp-57fb76fcdb-xfkdj/headlamp id=9ece8bd9-e051-4e9c-a08a-174a05cbaebe name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.044153577Z" level=info msg="Stopping container: 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557 (timeout: 30s)" id=59988acf-cbf5-4ccc-b391-0d71d7d986dc name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:27:04 addons-821781 conmon[8845]: conmon 34675749bf60eae87e1a <ninfo>: container 8858 exited with status 2
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.173583792Z" level=info msg="Stopped container 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=59988acf-cbf5-4ccc-b391-0d71d7d986dc name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.174150719Z" level=info msg="Stopping pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=5920ec82-b971-47e8-ab8f-97f10512b921 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.174391947Z" level=info msg="Got pod network &{Name:headlamp-57fb76fcdb-xfkdj Namespace:headlamp ID:a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a UID:cad0d003-8455-4239-998d-1327610acea6 NetNS:/var/run/netns/55d20309-9c81-477c-9b7b-a9b7cabae71c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.174556187Z" level=info msg="Deleting pod headlamp_headlamp-57fb76fcdb-xfkdj from CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.210730567Z" level=info msg="Stopped pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=5920ec82-b971-47e8-ab8f-97f10512b921 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.932887074Z" level=info msg="Removing container: 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557" id=7c971f4c-d380-4cd4-ad5a-169db70dfa55 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:04 addons-821781 crio[1028]: time="2024-09-16 10:27:04.946676009Z" level=info msg="Removed container 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557: headlamp/headlamp-57fb76fcdb-xfkdj/headlamp" id=7c971f4c-d380-4cd4-ad5a-169db70dfa55 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.031051552Z" level=info msg="Stopping container: 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f (timeout: 30s)" id=16404828-538d-4914-bc6a-34043446f331 name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:27:27 addons-821781 conmon[4128]: conmon 960e66cd3823f16f4a22 <ninfo>: container 4140 exited with status 2
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.167308595Z" level=info msg="Stopped container 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f: kube-system/tiller-deploy-b48cc5f79-jcsqv/tiller" id=16404828-538d-4914-bc6a-34043446f331 name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.167853011Z" level=info msg="Stopping pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=75e7f390-73a1-4c31-ae77-e6004ec4617f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.168131464Z" level=info msg="Got pod network &{Name:tiller-deploy-b48cc5f79-jcsqv Namespace:kube-system ID:5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc UID:3177a86a-dac6-4f73-acef-e8b6f8c0aed1 NetNS:/var/run/netns/b92901ae-3e92-487e-94be-09e4b8bf1ba5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.168308000Z" level=info msg="Deleting pod kube-system_tiller-deploy-b48cc5f79-jcsqv from CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.214863134Z" level=info msg="Stopped pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=75e7f390-73a1-4c31-ae77-e6004ec4617f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:27 addons-821781 crio[1028]: time="2024-09-16 10:27:27.985987444Z" level=info msg="Removing container: 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f" id=20c33ef3-37f7-4f43-97d7-23b173848fd1 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:28 addons-821781 crio[1028]: time="2024-09-16 10:27:28.002745748Z" level=info msg="Removed container 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f: kube-system/tiller-deploy-b48cc5f79-jcsqv/tiller" id=20c33ef3-37f7-4f43-97d7-23b173848fd1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	0dbc187486a77       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 About a minute ago   Running             gcp-auth                                 0                   754882dcda596       gcp-auth-89d5ffd79-b6kzx
	3603c45c1e4ab       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             About a minute ago   Running             controller                               0                   31855714f04d8       ingress-nginx-controller-bc57996ff-8jlsc
	b6501ff69088d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	85a5122ba30eb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	33527f5387a55       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	2b3dcba2a09e7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ea5a7e7486ae3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	5247d23b3a397       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago        Running             volume-snapshot-controller               0                   5faba155231dd       snapshot-controller-56fcc65765-tdxm7
	68547a0643ba6       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago        Running             csi-resizer                              0                   4cb61d4296010       csi-hostpath-resizer-0
	a2eec9453e9d3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             2 minutes ago        Running             csi-attacher                             0                   205f02ffaeb65       csi-hostpath-attacher-0
	d3033819602e2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago        Running             csi-external-health-monitor-controller   0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ffffb6d23a520       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   2 minutes ago        Exited              patch                                    0                   0defdefc8e690       ingress-nginx-admission-patch-22v56
	adcb6aad69051       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago        Running             volume-snapshot-controller               0                   b44ff8bf56a7c       snapshot-controller-56fcc65765-b752p
	d7c74998aab32       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   2 minutes ago        Exited              create                                   0                   92efe213e3cc9       ingress-nginx-admission-create-dgb9n
	318be751079db       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             2 minutes ago        Running             local-path-provisioner                   0                   cdfaa5befff59       local-path-provisioner-86d989889c-6xhgj
	2a650198714d3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        2 minutes ago        Running             metrics-server                           0                   a92ded8c2c84e       metrics-server-84c5f94fbc-t6sfx
	9db25418c7b36       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             2 minutes ago        Running             minikube-ingress-dns                     0                   0a160d796662b       kube-ingress-dns-minikube
	fd1c0fa2e8742       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             2 minutes ago        Running             storage-provisioner                      0                   578052293e511       storage-provisioner
	5fc078f948938       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             2 minutes ago        Running             coredns                                  0                   dd25c29f2c98b       coredns-7c65d6cfc9-f6b44
	8953bd3ac9bbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             3 minutes ago        Running             kube-proxy                               0                   31612ec902e41       kube-proxy-7grrw
	e3e02e9338f21       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             3 minutes ago        Running             kindnet-cni                              0                   efca226e04346       kindnet-2bwl4
	f7c9dd60c650e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             3 minutes ago        Running             kube-apiserver                           0                   325d1d3961d30       kube-apiserver-addons-821781
	aef3299386ef0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             3 minutes ago        Running             etcd                                     0                   5db6677261478       etcd-addons-821781
	23817b3f6401e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             3 minutes ago        Running             kube-scheduler                           0                   192ccdf49d648       kube-scheduler-addons-821781
	319dfee9ab334       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             3 minutes ago        Running             kube-controller-manager                  0                   471807181e888       kube-controller-manager-addons-821781
	
	
	==> coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] <==
	[INFO] 10.244.0.11:54433 - 5196 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117872s
	[INFO] 10.244.0.11:55203 - 39009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079023s
	[INFO] 10.244.0.11:55203 - 18278 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066179s
	[INFO] 10.244.0.11:53992 - 3361 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005725192s
	[INFO] 10.244.0.11:53992 - 5182 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005902528s
	[INFO] 10.244.0.11:58640 - 39752 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005962306s
	[INFO] 10.244.0.11:58640 - 45636 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007442692s
	[INFO] 10.244.0.11:58081 - 46876 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004814518s
	[INFO] 10.244.0.11:58081 - 7960 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005069952s
	[INFO] 10.244.0.11:56786 - 21825 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084442s
	[INFO] 10.244.0.11:56786 - 8540 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121405s
	[INFO] 10.244.0.21:49162 - 58748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183854s
	[INFO] 10.244.0.21:60540 - 21143 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264439s
	[INFO] 10.244.0.21:57612 - 22108 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123843s
	[INFO] 10.244.0.21:56370 - 29690 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174744s
	[INFO] 10.244.0.21:53939 - 42345 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115165s
	[INFO] 10.244.0.21:54191 - 30184 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102696s
	[INFO] 10.244.0.21:43721 - 49242 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007714914s
	[INFO] 10.244.0.21:58502 - 61297 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008280312s
	[INFO] 10.244.0.21:45585 - 36043 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008154564s
	[INFO] 10.244.0.21:50514 - 10749 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008661461s
	[INFO] 10.244.0.21:41083 - 31758 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006832696s
	[INFO] 10.244.0.21:53762 - 8306 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007439813s
	[INFO] 10.244.0.21:37796 - 13809 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002178233s
	[INFO] 10.244.0.21:36516 - 28559 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002337896s
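
The runs of NXDOMAIN answers above are normal, not failures: with the default `ndots:5` in pod resolv.conf, a name like `storage.googleapis.com` is first tried against every suffix in the cluster and GCE search path (`svc.cluster.local`, `cluster.local`, `europe-west2-a.c.k8s-minikube.internal`, `c.k8s-minikube.internal`, `google.internal`) before the bare name finally resolves with NOERROR in the last two lines. The search path in effect can be inspected from a throwaway pod; a sketch, assuming a working kubectl and using the busybox image purely for illustration:

    kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- cat /etc/resolv.conf
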
	
	
	==> describe nodes <==
	Name:               addons-821781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-821781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-821781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-821781
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-821781"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-821781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:27:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:24:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-821781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a93a1abfd8e74fb89ecb0b25fd80b840
	  System UUID:                c474d608-aa29-4551-b357-d17e9479a01d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-b6kzx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8jlsc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m30s
	  kube-system                 coredns-7c65d6cfc9-f6b44                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m36s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 csi-hostpathplugin-pwtwp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 etcd-addons-821781                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m41s
	  kube-system                 kindnet-2bwl4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m36s
	  kube-system                 kube-apiserver-addons-821781                250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-controller-manager-addons-821781       200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-proxy-7grrw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-scheduler-addons-821781                100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 metrics-server-84c5f94fbc-t6sfx             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m31s
	  kube-system                 snapshot-controller-56fcc65765-b752p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 snapshot-controller-56fcc65765-tdxm7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  local-path-storage          local-path-provisioner-86d989889c-6xhgj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 3m34s  kube-proxy       
	  Normal   Starting                 3m41s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m41s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m41s  kubelet          Node addons-821781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m41s  kubelet          Node addons-821781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m41s  kubelet          Node addons-821781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m37s  node-controller  Node addons-821781 event: Registered Node addons-821781 in Controller
	  Normal   NodeReady                2m55s  kubelet          Node addons-821781 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.000714]  #3
	[  +0.002750]  #4
	[  +0.001708] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003513] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002098] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] <==
	{"level":"warn","ts":"2024-09-16T10:24:33.965134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.284694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-09-16T10:24:33.965140Z","caller":"traceutil/trace.go:171","msg":"trace[589393049] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.482158ms","start":"2024-09-16T10:24:33.834652Z","end":"2024-09-16T10:24:33.965134Z","steps":["trace[589393049] 'agreement among raft nodes before linearized reading'  (duration: 130.392783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.112983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs\" ","response":"range_response_count:1 size:560"}
	{"level":"warn","ts":"2024-09-16T10:24:33.965172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.412831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/default\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964790Z","caller":"traceutil/trace.go:171","msg":"trace[1719481168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-resizer; range_end:; response_count:1; response_revision:871; }","duration":"130.308398ms","start":"2024-09-16T10:24:33.834475Z","end":"2024-09-16T10:24:33.964784Z","steps":["trace[1719481168] 'agreement among raft nodes before linearized reading'  (duration: 130.231604ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965031Z","caller":"traceutil/trace.go:171","msg":"trace[1439753586] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:871; }","duration":"130.351105ms","start":"2024-09-16T10:24:33.834675Z","end":"2024-09-16T10:24:33.965026Z","steps":["trace[1439753586] 'agreement among raft nodes before linearized reading'  (duration: 130.285964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.622694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:979"}
	{"level":"info","ts":"2024-09-16T10:24:33.965260Z","caller":"traceutil/trace.go:171","msg":"trace[3301844] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.644948ms","start":"2024-09-16T10:24:33.834605Z","end":"2024-09-16T10:24:33.965250Z","steps":["trace[3301844] 'agreement among raft nodes before linearized reading'  (duration: 130.58562ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.745393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:1 size:878"}
	{"level":"info","ts":"2024-09-16T10:24:33.965091Z","caller":"traceutil/trace.go:171","msg":"trace[630312888] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.242708ms","start":"2024-09-16T10:24:33.834842Z","end":"2024-09-16T10:24:33.965085Z","steps":["trace[630312888] 'agreement among raft nodes before linearized reading'  (duration: 130.2013ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965306Z","caller":"traceutil/trace.go:171","msg":"trace[687212945] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:1; response_revision:871; }","duration":"130.768911ms","start":"2024-09-16T10:24:33.834532Z","end":"2024-09-16T10:24:33.965301Z","steps":["trace[687212945] 'agreement among raft nodes before linearized reading'  (duration: 130.728326ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965159Z","caller":"traceutil/trace.go:171","msg":"trace[1851867066] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:871; }","duration":"130.30942ms","start":"2024-09-16T10:24:33.834844Z","end":"2024-09-16T10:24:33.965154Z","steps":["trace[1851867066] 'agreement among raft nodes before linearized reading'  (duration: 130.267065ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965180Z","caller":"traceutil/trace.go:171","msg":"trace[395277833] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.138451ms","start":"2024-09-16T10:24:33.835036Z","end":"2024-09-16T10:24:33.965175Z","steps":["trace[395277833] 'agreement among raft nodes before linearized reading'  (duration: 130.084008ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.964761Z","caller":"traceutil/trace.go:171","msg":"trace[1846466404] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.050288ms","start":"2024-09-16T10:24:33.834699Z","end":"2024-09-16T10:24:33.964750Z","steps":["trace[1846466404] 'agreement among raft nodes before linearized reading'  (duration: 129.823354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.867331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964791Z","caller":"traceutil/trace.go:171","msg":"trace[1570104672] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"101.79293ms","start":"2024-09-16T10:24:33.862992Z","end":"2024-09-16T10:24:33.964785Z","steps":["trace[1570104672] 'agreement among raft nodes before linearized reading'  (duration: 101.763738ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965421Z","caller":"traceutil/trace.go:171","msg":"trace[1827982125] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:871; }","duration":"130.890995ms","start":"2024-09-16T10:24:33.834525Z","end":"2024-09-16T10:24:33.965416Z","steps":["trace[1827982125] 'agreement among raft nodes before linearized reading'  (duration: 130.852764ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965209Z","caller":"traceutil/trace.go:171","msg":"trace[945447364] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.449227ms","start":"2024-09-16T10:24:33.834754Z","end":"2024-09-16T10:24:33.965203Z","steps":["trace[945447364] 'agreement among raft nodes before linearized reading'  (duration: 130.396497ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.001003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-09-16T10:24:33.965579Z","caller":"traceutil/trace.go:171","msg":"trace[1490541276] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:871; }","duration":"131.063942ms","start":"2024-09-16T10:24:33.834502Z","end":"2024-09-16T10:24:33.965566Z","steps":["trace[1490541276] 'agreement among raft nodes before linearized reading'  (duration: 130.98224ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.964852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.18611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/snapshot-controller\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2024-09-16T10:24:33.965093Z","caller":"traceutil/trace.go:171","msg":"trace[1524858032] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"129.821011ms","start":"2024-09-16T10:24:33.835267Z","end":"2024-09-16T10:24:33.965088Z","steps":["trace[1524858032] 'agreement among raft nodes before linearized reading'  (duration: 129.760392ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965632Z","caller":"traceutil/trace.go:171","msg":"trace[945136232] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/snapshot-controller; range_end:; response_count:1; response_revision:871; }","duration":"129.963575ms","start":"2024-09-16T10:24:33.835661Z","end":"2024-09-16T10:24:33.965624Z","steps":["trace[945136232] 'agreement among raft nodes before linearized reading'  (duration: 129.14136ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:26.413976Z","caller":"traceutil/trace.go:171","msg":"trace[182413184] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"129.574416ms","start":"2024-09-16T10:25:26.284376Z","end":"2024-09-16T10:25:26.413950Z","steps":["trace[182413184] 'process raft request'  (duration: 67.733345ms)","trace[182413184] 'compare'  (duration: 61.701552ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:48.300626Z","caller":"traceutil/trace.go:171","msg":"trace[869038067] transaction","detail":"{read_only:false; response_revision:1265; number_of_response:1; }","duration":"110.748846ms","start":"2024-09-16T10:25:48.189856Z","end":"2024-09-16T10:25:48.300605Z","steps":["trace[869038067] 'process raft request'  (duration: 107.391476ms)"],"step_count":1}
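
The "apply request took too long" warnings above all cluster around 10:24:33, and each paired trace attributes essentially the whole duration to "agreement among raft nodes before linearized reading": the reads were queued behind raft consensus, which points at transient disk or CPU pressure on the node rather than any particular slow key. One way to measure backend latency directly, offered as a hedged suggestion for a live cluster rather than something this run performed:

    etcdctl check perf    # synthetic load; reports whether storage latency is within etcd's expectations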
	
	
	==> gcp-auth [0dbc187486a77d691a5db4775360d83cdf6dd7084d4c3bd9123b7e051fd6bd74] <==
	2024/09/16 10:25:47 GCP Auth Webhook started!
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	
	
	==> kernel <==
	 10:27:28 up 9 min,  0 users,  load average: 0.72, 0.69, 0.32
	Linux addons-821781 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] <==
	I0916 10:25:23.298404       1 main.go:299] handling current node
	I0916 10:25:33.299058       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:33.299118       1 main.go:299] handling current node
	I0916 10:25:43.305413       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:43.305453       1 main.go:299] handling current node
	I0916 10:25:53.299376       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:25:53.299407       1 main.go:299] handling current node
	I0916 10:26:03.303024       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:03.303056       1 main.go:299] handling current node
	I0916 10:26:13.305426       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:13.305472       1 main.go:299] handling current node
	I0916 10:26:23.298370       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:23.298453       1 main.go:299] handling current node
	I0916 10:26:33.300653       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:33.300694       1 main.go:299] handling current node
	I0916 10:26:43.298403       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:43.298453       1 main.go:299] handling current node
	I0916 10:26:53.299220       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:26:53.299254       1 main.go:299] handling current node
	I0916 10:27:03.301422       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:27:03.301456       1 main.go:299] handling current node
	I0916 10:27:13.301464       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:27:13.301503       1 main.go:299] handling current node
	I0916 10:27:23.298916       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:27:23.298949       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] <==
	W0916 10:24:33.565907       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	W0916 10:24:33.565951       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.565953       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	E0916 10:24:33.565979       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:33.599472       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.599513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:58.720213       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 10:24:58.720232       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:58.720259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 10:24:58.720301       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:24:58.721354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 10:24:58.721362       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:25:54.202103       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:25:54.202136       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.74.143:443: connect: connection refused" logger="UnhandledError"
	E0916 10:25:54.202195       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:25:54.215066       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:26:47.647164       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:26:48.662402       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 10:26:53.534738       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.40.159"}
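
Two distinct problems are visible in this section: the gcp-auth mutating webhook was refusing connections at 10:24:33 (the apiserver "failed open", so admission continued), and the aggregated v1beta1.metrics.k8s.io APIService kept returning 503s until metrics-server became reachable. With a working kubectl (which this run lacked), the usual checks would run along these lines; the `k8s-app=metrics-server` label is the one the minikube addon conventionally uses, assumed here:

    kubectl get apiservice v1beta1.metrics.k8s.io             # the Available condition explains the 503s
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl get mutatingwebhookconfigurations                 # lists the gcp-auth-mutate.k8s.io webhook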
	
	
	==> kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] <==
	E0916 10:26:49.484140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:26:51.446490       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:26:51.446537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:26:51.922555       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:26:51.922598       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:26:52.320583       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:26:52.320624       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:26:53.601416       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="52.049957ms"
	I0916 10:26:53.606395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="4.851613ms"
	I0916 10:26:53.606494       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="58.578µs"
	I0916 10:26:53.610282       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="43.744µs"
	W0916 10:26:55.044011       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:26:55.044048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:26:57.755257       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0916 10:26:57.926605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="51.47µs"
	I0916 10:26:57.939305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.337707ms"
	I0916 10:26:57.939375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="37.082µs"
	I0916 10:27:04.034685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="8.781µs"
	W0916 10:27:04.365551       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:04.365591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:27:14.151507       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0916 10:27:21.385941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-821781"
	I0916 10:27:27.020674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="7.724µs"
	W0916 10:27:28.351938       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:28.351975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] <==
	I0916 10:23:52.638596       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:52.921753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:52.922374       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:53.313675       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:53.319718       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:53.497957       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:53.508623       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:53.508659       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:53.510794       1 config.go:199] "Starting service config controller"
	I0916 10:23:53.510833       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:53.510868       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:53.510874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:53.511480       1 config.go:328] "Starting node config controller"
	I0916 10:23:53.511491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:53.617474       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:53.617556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:23:53.711794       1 shared_informer.go:320] Caches are synced for node config
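
kube-proxy started cleanly; the one warning is advisory: with `nodePortAddresses` unset, NodePort services accept traffic on every local IP. The log message itself names the remedy, so as a sketch (flag taken verbatim from the warning, not something this cluster applied):

    kube-proxy --nodeport-addresses primary   # limit NodePort listeners to the node's primary IPs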
	
	
	==> kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] <==
	W0916 10:23:44.897301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0916 10:23:44.897124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:44.898296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:44.897140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:44.898337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:44.898344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:45.722888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:45.722927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.731239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.731280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.734491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:23:45.734527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.741804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.741845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.771121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:45.771158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.886831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.886867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.913242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.913290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:46.023935       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:46.023972       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:23:48.220429       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
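
The "forbidden" list/watch errors between 10:23:44 and 10:23:46 are the familiar scheduler startup race: its informers begin listing before the RBAC bootstrap policy has been reconciled, and the final line shows the caches syncing once it has. Were this persistent rather than transient, a hedged way to test the grants directly:

    kubectl auth can-i list pods --as system:kube-scheduler   # should print "yes" once bootstrap completes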
	
	
	==> kubelet <==
	Sep 16 10:26:57 addons-821781 kubelet[1623]: I0916 10:26:57.925371    1623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-57fb76fcdb-xfkdj" podStartSLOduration=1.092461955 podStartE2EDuration="4.925327313s" podCreationTimestamp="2024-09-16 10:26:53 +0000 UTC" firstStartedPulling="2024-09-16 10:26:53.933051174 +0000 UTC m=+186.920335600" lastFinishedPulling="2024-09-16 10:26:57.765916532 +0000 UTC m=+190.753200958" observedRunningTime="2024-09-16 10:26:57.924456307 +0000 UTC m=+190.911740791" watchObservedRunningTime="2024-09-16 10:26:57.925327313 +0000 UTC m=+190.912611757"
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.353069    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2w7k\" (UniqueName: \"kubernetes.io/projected/cad0d003-8455-4239-998d-1327610acea6-kube-api-access-x2w7k\") pod \"cad0d003-8455-4239-998d-1327610acea6\" (UID: \"cad0d003-8455-4239-998d-1327610acea6\") "
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.353125    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cad0d003-8455-4239-998d-1327610acea6-gcp-creds\") pod \"cad0d003-8455-4239-998d-1327610acea6\" (UID: \"cad0d003-8455-4239-998d-1327610acea6\") "
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.353208    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cad0d003-8455-4239-998d-1327610acea6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "cad0d003-8455-4239-998d-1327610acea6" (UID: "cad0d003-8455-4239-998d-1327610acea6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.354914    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad0d003-8455-4239-998d-1327610acea6-kube-api-access-x2w7k" (OuterVolumeSpecName: "kube-api-access-x2w7k") pod "cad0d003-8455-4239-998d-1327610acea6" (UID: "cad0d003-8455-4239-998d-1327610acea6"). InnerVolumeSpecName "kube-api-access-x2w7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.454005    1623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x2w7k\" (UniqueName: \"kubernetes.io/projected/cad0d003-8455-4239-998d-1327610acea6-kube-api-access-x2w7k\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.454041    1623 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cad0d003-8455-4239-998d-1327610acea6-gcp-creds\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.931819    1623 scope.go:117] "RemoveContainer" containerID="34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557"
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.946973    1623 scope.go:117] "RemoveContainer" containerID="34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557"
	Sep 16 10:27:04 addons-821781 kubelet[1623]: E0916 10:27:04.947443    1623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557\": container with ID starting with 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557 not found: ID does not exist" containerID="34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557"
	Sep 16 10:27:04 addons-821781 kubelet[1623]: I0916 10:27:04.947483    1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557"} err="failed to get container status \"34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557\": rpc error: code = NotFound desc = could not find container \"34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557\": container with ID starting with 34675749bf60eae87e1a8534e7afd3f9fd11b515a889fee2c34ee4b7feb3b557 not found: ID does not exist"
	Sep 16 10:27:05 addons-821781 kubelet[1623]: I0916 10:27:05.108889    1623 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad0d003-8455-4239-998d-1327610acea6" path="/var/lib/kubelet/pods/cad0d003-8455-4239-998d-1327610acea6/volumes"
	Sep 16 10:27:07 addons-821781 kubelet[1623]: E0916 10:27:07.229157    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482427229008349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:07 addons-821781 kubelet[1623]: E0916 10:27:07.229199    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482427229008349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:17 addons-821781 kubelet[1623]: E0916 10:27:17.231276    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482437231136432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:17 addons-821781 kubelet[1623]: E0916 10:27:17.231313    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482437231136432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:27 addons-821781 kubelet[1623]: E0916 10:27:27.233281    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482447233102868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:27 addons-821781 kubelet[1623]: E0916 10:27:27.233316    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482447233102868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:27 addons-821781 kubelet[1623]: I0916 10:27:27.413770    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcdts\" (UniqueName: \"kubernetes.io/projected/3177a86a-dac6-4f73-acef-e8b6f8c0aed1-kube-api-access-zcdts\") pod \"3177a86a-dac6-4f73-acef-e8b6f8c0aed1\" (UID: \"3177a86a-dac6-4f73-acef-e8b6f8c0aed1\") "
	Sep 16 10:27:27 addons-821781 kubelet[1623]: I0916 10:27:27.416533    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3177a86a-dac6-4f73-acef-e8b6f8c0aed1-kube-api-access-zcdts" (OuterVolumeSpecName: "kube-api-access-zcdts") pod "3177a86a-dac6-4f73-acef-e8b6f8c0aed1" (UID: "3177a86a-dac6-4f73-acef-e8b6f8c0aed1"). InnerVolumeSpecName "kube-api-access-zcdts". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:27:27 addons-821781 kubelet[1623]: I0916 10:27:27.516935    1623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zcdts\" (UniqueName: \"kubernetes.io/projected/3177a86a-dac6-4f73-acef-e8b6f8c0aed1-kube-api-access-zcdts\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:27:27 addons-821781 kubelet[1623]: I0916 10:27:27.984941    1623 scope.go:117] "RemoveContainer" containerID="960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f"
	Sep 16 10:27:28 addons-821781 kubelet[1623]: I0916 10:27:28.002967    1623 scope.go:117] "RemoveContainer" containerID="960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f"
	Sep 16 10:27:28 addons-821781 kubelet[1623]: E0916 10:27:28.003407    1623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f\": container with ID starting with 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f not found: ID does not exist" containerID="960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f"
	Sep 16 10:27:28 addons-821781 kubelet[1623]: I0916 10:27:28.003453    1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f"} err="failed to get container status \"960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f\": rpc error: code = NotFound desc = could not find container \"960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f\": container with ID starting with 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f not found: ID does not exist"
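
The repeating eviction-manager failures are all the same complaint: cri-o's ImageFsInfoResponse carries an image-filesystem entry but an empty ContainerFilesystems list, so this kubelet cannot compute HasDedicatedImageFs and skips eviction synchronization each cycle. The raw response can be inspected straight from the runtime; crictl being present inside the minikube node is an assumption about this image:

    sudo crictl imagefsinfo   # prints the same ImageFsInfoResponse the kubelet is rejecting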
	
	
	==> storage-provisioner [fd1c0fa2e8742125904216a45b6d84f9b367888422cb6083d3e482fd77452994] <==
	I0916 10:24:34.797513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:34.805288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:34.805397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:34.813404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:34.813588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	I0916 10:24:34.814304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d6ca95d-581a-4537-b803-ac9e02f43ec1", APIVersion:"v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4 became leader
	I0916 10:24:34.914571       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-821781 -n addons-821781
helpers_test.go:261: (dbg) Run:  kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (367.366µs)
helpers_test.go:263: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/HelmTiller (82.64s)
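
Note the shape of this failure: the tiller workload itself wound down fine (the controller-manager logged the tiller-deploy ReplicaSet sync at 10:27:27); what failed is every `kubectl` invocation, each in under a millisecond, with `fork/exec /usr/local/bin/kubectl: exec format error`. The kernel returns that error (ENOEXEC) when the file is not a valid executable for the host, which on an amd64 runner usually means a kubectl binary built for another architecture, or a truncated download. A minimal check, assuming shell access to the CI host and offered as illustration rather than part of the harness:

    uname -m                                   # host architecture; x86_64 expected here
    file /usr/local/bin/kubectl                # should report an ELF 64-bit x86-64 executable
    head -c 4 /usr/local/bin/kubectl | od -c   # a valid ELF binary begins with \177 E L F

If those disagree, reinstalling the matching kubectl release binary should clear the identical `exec format error` seen across the other failing tests in this report.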

TestAddons/parallel/CSI (362.01s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 10.306157ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-821781 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:570: (dbg) Non-zero exit: kubectl --context addons-821781 create -f testdata/csi-hostpath-driver/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (397.463µs)
addons_test.go:572: creating sample PVC with kubectl --context addons-821781 create -f testdata/csi-hostpath-driver/pvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (350.086µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.648µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.782µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (446.374µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (406.186µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.68µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (402.103µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (505.069µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (413.003µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.853µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (434.684µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (353.459µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (397.034µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (397.281µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (456.59µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (408.15µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (417.493µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (429.049µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.377µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.873µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (384.352µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (480.236µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (437.297µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.257µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (454.954µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (503.656µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (418.332µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (396.923µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (478.133µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.025µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (402.052µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.221µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (506.024µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (443.886µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (442.815µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (471.714µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (419.371µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (482.011µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (375.773µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (418.936µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (479.273µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (512.967µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.879µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (480.024µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (382.211µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (419.901µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (445.332µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (497.158µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.961µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (444.049µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.795µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.916µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (483.867µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (465.673µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (457.725µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (478.03µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (449.364µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.91µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (397.483µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.807µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.047µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (487.254µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (441.125µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.069µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (439.226µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (407.527µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (408.728µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.884µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.198µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (723.933µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.296µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.357µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (530.306µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (418.094µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (460.565µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.295µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.047µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (566.785µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (431.416µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.694µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (463.547µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (438.976µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (24.227863ms)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (438.222µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.734µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (413.317µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (473.732µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (490.15µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (417.438µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (475.963µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.428µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (467.864µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.981µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (452.679µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (425.283µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (482.573µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.234µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (416.161µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.266µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (439.618µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (482.991µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (513.835µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (512.099µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.823µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (395.414µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.559µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (515.385µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (494.739µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (519.09µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (480.493µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (420.705µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (491.843µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (490.979µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (520.903µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (480.82µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (421.043µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (449.779µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (506.407µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (540.524µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (403.127µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (416.077µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (521.7µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.864µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (489.198µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.142µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (510.557µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (468.951µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (452.417µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (408.762µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.897µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (419.265µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.07µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.549µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (499.946µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (520.637µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (490.699µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.711µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (590.696µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (413.263µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (524.048µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (530.516µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (431.85µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.303µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
[... the three-line Run / Non-zero exit / WARNING sequence above repeats 139 more times as the helper polls the PVC phase; every attempt fails with the same "fork/exec /usr/local/bin/kubectl: exec format error", each in roughly 0.4-0.7ms (one 24ms outlier). The final attempts are shown below ...]
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (496.653µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (495.133µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (513.064µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (514.646µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (485.225µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (479.786µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (497.046µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (518.083µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (513.169µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (623.97µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (528.356µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.411µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (516.445µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (513.91µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (515.692µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (490.543µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (538.697µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (492.599µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (514.977µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (553.606µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.836µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (540.766µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (477.799µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.897µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (510.548µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (514.199µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (510.286µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (496.97µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (575.574µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (504.897µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (481.234µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (497.697µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (482.39µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (565.206µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (454.779µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.523µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (460.205µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (510.6µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (534.361µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (537.897µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (501.904µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (480.527µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (536.869µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.794µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (481.007µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (487.704µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (475.87µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (556.323µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.521µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (494.159µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (465.463µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (614.415µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (589.096µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (507.688µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (499.305µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (498.938µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (505.161µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (515.271µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (467.636µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (528.026µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (514.307µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (503.423µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (472.485µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (531.594µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (489.403µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (491.341µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (558.254µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (526.659µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (509.937µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (498.443µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (501.655µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (499.984µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (570.48µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (547.14µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (494.105µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (541.576µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (543.765µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (518.543µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-821781 get pvc hpvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.392µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: context deadline exceeded
addons_test.go:576: failed waiting for PVC hpvc: context deadline exceeded
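For context, "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary at all — typically a binary built for a different CPU architecture, or a truncated download — which is why every poll above fails in well under a millisecond without ever reaching the cluster, until the poll context expires. Below is a minimal sketch of the retry pattern the helper is running, assuming the kube-context, namespace, and claim names from this run; the interval and timeout are assumptions, and the real loop lives in helpers_test.go, not here.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCPhase polls `kubectl get pvc` until the claim reports the wanted
// phase or the context deadline expires, mirroring the loop logged above.
func waitPVCPhase(ctx context.Context, kubeCtx, ns, name, want string) error {
	tick := time.NewTicker(2 * time.Second) // poll interval is an assumption
	defer tick.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeCtx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		select {
		case <-ctx.Done():
			// With a broken kubectl binary this branch is the only way out,
			// matching "failed waiting for PVC hpvc: context deadline exceeded".
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // timeout is an assumption
	defer cancel()
	if err := waitPVCPhase(ctx, "addons-821781", "default", "hpvc", "Bound"); err != nil {
		fmt.Println(err)
	}
}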
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-821781
helpers_test.go:235: (dbg) docker inspect addons-821781:
-- stdout --
	[
	    {
	        "Id": "60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9",
	        "Created": "2024-09-16T10:23:34.422231958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:34.564816551Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/hosts",
	        "LogPath": "/var/lib/docker/containers/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9/60dd933522c237926528b6c5e7a212d79bef004f4c44f1811d7473015503feb9-json.log",
	        "Name": "/addons-821781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-821781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-821781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9fdb7f442f2bc8f8b1c531f827d1433e9c48eeb074478379abd81a838844673/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-821781",
	                "Source": "/var/lib/docker/volumes/addons-821781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-821781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-821781",
	                "name.minikube.sigs.k8s.io": "addons-821781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb89cb54fc4711f104a02c8d2ebaaa0dae68769e21054477c7dd719ee876c61d",
	            "SandboxKey": "/var/run/docker/netns/cb89cb54fc47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-821781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "66d8d4a2fe0f9ff012a57288f3992a27df27bc2a73eb33a40ff3adbc0fa270ea",
	                    "EndpointID": "54da588c62c62ca60fdaac7dbe299e76b7fad63e791a3bfc770a096d3640b2fb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-821781",
	                        "60dd933522c2"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
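Since docker inspect emits plain JSON, a post-mortem dump like the block above can also be consumed programmatically rather than eyeballed. A small sketch — not part of the test harness — that decodes just the fields relevant here, assuming the docker CLI is on PATH and the container name from this run:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// containerInfo declares only the fields this sketch reads; `docker inspect`
// emits many more, and encoding/json simply ignores unknown keys.
type containerInfo struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// docker inspect prints a JSON array with one element per named container.
	out, err := exec.Command("docker", "inspect", "addons-821781").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	var infos []containerInfo
	if err := json.Unmarshal(out, &infos); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, c := range infos {
		fmt.Printf("%s: status=%s running=%v\n", c.Name, c.State.Status, c.State.Running)
		// 8443/tcp is the API-server port the node container publishes on the host.
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("  8443/tcp -> %s:%s\n", b.HostIP, b.HostPort)
		}
	}
}

Against the dump above this would report status=running with 8443/tcp published on 127.0.0.1:32771 — the node container itself is healthy, so the failure stays confined to the host's kubectl binary.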
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-821781 -n addons-821781
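The --format={{.Host}} flag on that status call is a Go text/template evaluated against minikube's status value, printing a single field. An illustrative re-creation of the mechanism (the struct and field names below are assumptions for the sketch, not minikube's actual types):

package main

import (
	"log"
	"os"
	"text/template"
)

// status stands in for whatever struct the --format template is applied to;
// only the template mechanics are the point here.
type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// The flag value is parsed as a template; {{.Host}} selects one field.
	tmpl, err := template.New("status").Parse("{{.Host}}\n")
	if err != nil {
		log.Fatal(err)
	}
	s := status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		log.Fatal(err)
	}
}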
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-821781 logs -n 25: (1.22401852s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-534059              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-920673              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-534059              | download-only-534059   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-920673              | download-only-920673   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-291625               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-291625            | download-docker-291625 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-597115                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44611               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-597115              | binary-mirror-597115   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | disable dashboard -p                 | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| start   | -p addons-821781 --wait=true         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:26 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| ip      | addons-821781 ip                     | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | addons-821781                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons disable         | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821781 addons                 | addons-821781          | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:31 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
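
For reference, the start invocation recorded above can be replayed by hand. A minimal sketch, assuming a host with Docker installed and the same minikube v1.34.0 binary on PATH; the profile name and every flag are taken verbatim from the table:

	# Recreate the addons cluster exercised in this run
	minikube start -p addons-821781 --wait=true --memory=4000 --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots \
	  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	  --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	  --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	  --addons=ingress --addons=ingress-dns --addons=helm-tiller \
	  --driver=docker --container-runtime=crio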
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
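
The lines that follow use that klog convention: severity letter (I/W/E/F), date as mmdd, timestamp, thread id, then source file:line. When triaging a saved copy of such a log, the hottest source locations can be tallied with a one-liner; minikube-start.log below is a hypothetical file name:

	# Count which source file:line entries log most often
	grep -oE '[a-z_]+\.go:[0-9]+\]' minikube-start.log | sort | uniq -c | sort -rn | head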
	I0916 10:23:11.785613   12642 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:11.786005   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786020   12642 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:11.786026   12642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:11.786201   12642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:23:11.786846   12642 out.go:352] Setting JSON to false
	I0916 10:23:11.787652   12642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":332,"bootTime":1726481860,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:11.787744   12642 start.go:139] virtualization: kvm guest
	I0916 10:23:11.789971   12642 out.go:177] * [addons-821781] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:11.791581   12642 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:11.791602   12642 notify.go:220] Checking for updates...
	I0916 10:23:11.793279   12642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:11.794876   12642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:11.796234   12642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:23:11.797605   12642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:11.798881   12642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:11.800381   12642 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:11.822354   12642 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:11.822435   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.875294   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.865218731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.875392   12642 docker.go:318] overlay module found
	I0916 10:23:11.877179   12642 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:11.878539   12642 start.go:297] selected driver: docker
	I0916 10:23:11.878555   12642 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:11.878567   12642 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:11.879376   12642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:11.928080   12642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:11.918595521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:11.928248   12642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:11.928460   12642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:11.930314   12642 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:11.931824   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:11.931880   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:11.931896   12642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:11.931970   12642 start.go:340] cluster config:
	{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:11.933478   12642 out.go:177] * Starting "addons-821781" primary control-plane node in "addons-821781" cluster
	I0916 10:23:11.934979   12642 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:23:11.936645   12642 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:11.938033   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:11.938077   12642 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:23:11.938086   12642 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:11.938151   12642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:11.938181   12642 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:11.938195   12642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:23:11.938528   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:11.938559   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json: {Name:mkb2d65543ac9e0f1211fb3bb619eaf59705ab34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:11.954455   12642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:11.954550   12642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:11.954565   12642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:11.954570   12642 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:11.954578   12642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:11.954585   12642 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:24.468174   12642 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:24.468219   12642 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:24.468270   12642 start.go:360] acquireMachinesLock for addons-821781: {Name:mk2b69b21902e1a037d888f1a4c14b20c068c000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:24.468392   12642 start.go:364] duration metric: took 101µs to acquireMachinesLock for "addons-821781"
	I0916 10:23:24.468422   12642 start.go:93] Provisioning new machine with config: &{Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:24.468511   12642 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:24.470800   12642 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:24.471033   12642 start.go:159] libmachine.API.Create for "addons-821781" (driver="docker")
	I0916 10:23:24.471057   12642 client.go:168] LocalClient.Create starting
	I0916 10:23:24.471161   12642 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:23:24.563569   12642 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:23:24.843226   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:24.859906   12642 cli_runner.go:211] docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:24.859982   12642 network_create.go:284] running [docker network inspect addons-821781] to gather additional debugging logs...
	I0916 10:23:24.860006   12642 cli_runner.go:164] Run: docker network inspect addons-821781
	W0916 10:23:24.875695   12642 cli_runner.go:211] docker network inspect addons-821781 returned with exit code 1
	I0916 10:23:24.875725   12642 network_create.go:287] error running [docker network inspect addons-821781]: docker network inspect addons-821781: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-821781 not found
	I0916 10:23:24.875736   12642 network_create.go:289] output of [docker network inspect addons-821781]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-821781 not found
	
	** /stderr **
	I0916 10:23:24.875825   12642 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:24.892396   12642 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019c5ea0}
	I0916 10:23:24.892450   12642 network_create.go:124] attempt to create docker network addons-821781 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:24.892494   12642 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-821781 addons-821781
	I0916 10:23:24.956362   12642 network_create.go:108] docker network addons-821781 192.168.49.0/24 created
	I0916 10:23:24.956397   12642 kic.go:121] calculated static IP "192.168.49.2" for the "addons-821781" container
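	The network_create step above builds a dedicated bridge network and pins the node to the first client address in it. A quick verification sketch, assuming the cluster is still running; subnet and gateway should match the 192.168.49.0/24 and 192.168.49.1 values in the log:

	docker network inspect addons-821781 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'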
	I0916 10:23:24.956461   12642 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:24.972596   12642 cli_runner.go:164] Run: docker volume create addons-821781 --label name.minikube.sigs.k8s.io=addons-821781 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:24.991422   12642 oci.go:103] Successfully created a docker volume addons-821781
	I0916 10:23:24.991492   12642 cli_runner.go:164] Run: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:29.942508   12642 cli_runner.go:217] Completed: docker run --rm --name addons-821781-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --entrypoint /usr/bin/test -v addons-821781:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.950978249s)
	I0916 10:23:29.942530   12642 oci.go:107] Successfully prepared a docker volume addons-821781
	I0916 10:23:29.942541   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:29.942558   12642 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:29.942601   12642 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:34.358289   12642 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-821781:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.415644078s)
	I0916 10:23:34.358318   12642 kic.go:203] duration metric: took 4.415757339s to extract preloaded images to volume ...
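	The two throwaway containers above are how minikube seeds the node volume: the first merely stats /var/lib to force volume creation, the second unpacks the preload tarball into it. The same extraction can be expressed directly; PRELOAD and KIC_IMAGE below are placeholders for the tarball and kicbase image paths shown in the log:

	# Unpack a preloaded image tarball into the named node volume
	PRELOAD=/path/to/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	KIC_IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" -v addons-821781:/extractDir \
	  "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir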
	W0916 10:23:34.358449   12642 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:34.358539   12642 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:34.407126   12642 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-821781 --name addons-821781 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821781 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-821781 --network addons-821781 --ip 192.168.49.2 --volume addons-821781:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:34.740907   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Running}}
	I0916 10:23:34.761456   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:34.779743   12642 cli_runner.go:164] Run: docker exec addons-821781 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:34.825817   12642 oci.go:144] the created container "addons-821781" has a running status.
	I0916 10:23:34.825843   12642 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa...
	I0916 10:23:35.044132   12642 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:35.071224   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.090107   12642 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:35.090127   12642 kic_runner.go:114] Args: [docker exec --privileged addons-821781 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:35.145473   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:35.163175   12642 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:35.163257   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.181284   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.181510   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.181525   12642 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:35.376812   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.376844   12642 ubuntu.go:169] provisioning hostname "addons-821781"
	I0916 10:23:35.376907   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.394400   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.394569   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.394582   12642 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-821781 && echo "addons-821781" | sudo tee /etc/hostname
	I0916 10:23:35.535760   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821781
	
	I0916 10:23:35.535841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.554208   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.554394   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.554410   12642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-821781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-821781/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-821781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:35.685491   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
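	The SSH command above is an idempotent /etc/hosts edit: rewrite an existing 127.0.1.1 entry in place if one exists, otherwise append one. A simplified standalone sketch of the same idea (it skips the in-place sed rewrite):

	NODE=addons-821781
	grep -q "127.0.1.1 ${NODE}" /etc/hosts || echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts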
	I0916 10:23:35.685520   12642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:23:35.685538   12642 ubuntu.go:177] setting up certificates
	I0916 10:23:35.685549   12642 provision.go:84] configureAuth start
	I0916 10:23:35.685599   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:35.701932   12642 provision.go:143] copyHostCerts
	I0916 10:23:35.702012   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:23:35.702151   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:23:35.702230   12642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:23:35.702295   12642 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.addons-821781 san=[127.0.0.1 192.168.49.2 addons-821781 localhost minikube]
	I0916 10:23:35.783034   12642 provision.go:177] copyRemoteCerts
	I0916 10:23:35.783097   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:35.783127   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.800161   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:35.893913   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:23:35.915296   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:23:35.937405   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:35.959050   12642 provision.go:87] duration metric: took 273.490922ms to configureAuth
	I0916 10:23:35.959082   12642 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:35.959246   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:35.959337   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:35.977055   12642 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:35.977247   12642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:35.977264   12642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:23:36.194829   12642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:23:36.194851   12642 machine.go:96] duration metric: took 1.031655385s to provisionDockerMachine
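	The drop-in written just above tells CRI-O to treat the whole service CIDR (10.96.0.0/12) as an insecure registry, which is what lets in-cluster registries be pulled from without TLS. A hedged check that the setting landed and the daemon came back after the restart:

	cat /etc/sysconfig/crio.minikube   # should contain the --insecure-registry flag
	systemctl is-active crio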
	I0916 10:23:36.194860   12642 client.go:171] duration metric: took 11.723797841s to LocalClient.Create
	I0916 10:23:36.194875   12642 start.go:167] duration metric: took 11.723845183s to libmachine.API.Create "addons-821781"
	I0916 10:23:36.194883   12642 start.go:293] postStartSetup for "addons-821781" (driver="docker")
	I0916 10:23:36.194895   12642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:36.194953   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:36.194987   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.212136   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.306296   12642 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:36.309608   12642 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:36.309638   12642 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:36.309646   12642 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:36.309652   12642 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:36.309662   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:23:36.309721   12642 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:23:36.309744   12642 start.go:296] duration metric: took 114.855265ms for postStartSetup
	I0916 10:23:36.310017   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.326531   12642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/config.json ...
	I0916 10:23:36.326849   12642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:36.326901   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.343127   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.434151   12642 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:36.438063   12642 start.go:128] duration metric: took 11.969538805s to createHost
	I0916 10:23:36.438087   12642 start.go:83] releasing machines lock for "addons-821781", held for 11.96968194s
	I0916 10:23:36.438170   12642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821781
	I0916 10:23:36.454099   12642 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:36.454144   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.454204   12642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:36.454276   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:36.472027   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.473599   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:36.640610   12642 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:36.644626   12642 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:23:36.780722   12642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:36.785109   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.802933   12642 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:36.803016   12642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:36.830084   12642 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:36.830106   12642 start.go:495] detecting cgroup driver to use...
	I0916 10:23:36.830135   12642 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:36.830178   12642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:23:36.843678   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:23:36.854207   12642 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:36.854255   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:36.867323   12642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:36.880430   12642 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:36.955777   12642 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:37.035979   12642 docker.go:233] disabling docker service ...
	I0916 10:23:37.036049   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:37.052780   12642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:37.063200   12642 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:37.138165   12642 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:37.215004   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:37.225051   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:37.239114   12642 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:23:37.239176   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.248375   12642 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:23:37.248431   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.257180   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.265957   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.274955   12642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:37.283271   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.291833   12642 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.305478   12642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:23:37.314242   12642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:37.321530   12642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:37.328860   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.397743   12642 ssh_runner.go:195] Run: sudo systemctl restart crio
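	The step-by-step sed edits above boil down to two settings in CRI-O's drop-in config plus a restart. A condensed sketch of the same edits, mirroring the commands in the log:

	# Pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	sudo sed -i \
	  -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' \
	  -e 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio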
	I0916 10:23:37.494696   12642 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:23:37.494784   12642 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:23:37.498069   12642 start.go:563] Will wait 60s for crictl version
	I0916 10:23:37.498121   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:23:37.501763   12642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:37.533845   12642 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
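	Because /etc/crictl.yaml (written a few steps earlier) points crictl at the CRI-O socket, the version probe above answers without any flags. A slightly fuller smoke test with the endpoint spelled out explicitly, offered as an illustrative check:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info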
	I0916 10:23:37.533971   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.568210   12642 ssh_runner.go:195] Run: crio --version
	I0916 10:23:37.602768   12642 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:23:37.604266   12642 cli_runner.go:164] Run: docker network inspect addons-821781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:37.620164   12642 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:37.623594   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:37.633351   12642 kubeadm.go:883] updating cluster {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:37.633481   12642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:37.633537   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.691488   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.691513   12642 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:23:37.691557   12642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:37.721834   12642 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:23:37.721855   12642 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:37.721863   12642 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:23:37.721943   12642 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-821781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
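	Note the systemd drop-in idiom in the kubelet unit above: the bare ExecStart= line first clears the ExecStart inherited from the base unit, and the following line installs the override. The effective merged command can be confirmed with:

	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the merged, effective command line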
	I0916 10:23:37.722004   12642 ssh_runner.go:195] Run: crio config
	I0916 10:23:37.761799   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:37.761826   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:37.761837   12642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:37.761858   12642 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-821781 NodeName:addons-821781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:37.761998   12642 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-821781"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
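	The generated config above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to lint such a file without touching the node, offered as an illustrative check rather than anything minikube itself runs:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run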
	
	I0916 10:23:37.762053   12642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:37.770243   12642 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:37.770305   12642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:37.778774   12642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:23:37.794482   12642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:37.810783   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0916 10:23:37.827097   12642 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:37.830351   12642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:37.840395   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:37.914798   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:37.926573   12642 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781 for IP: 192.168.49.2
	I0916 10:23:37.926602   12642 certs.go:194] generating shared ca certs ...
	I0916 10:23:37.926624   12642 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:37.926767   12642 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:23:38.165524   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt ...
	I0916 10:23:38.165552   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt: {Name:mk958b9d7b4e596cca12a43812b033701a1808ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165715   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key ...
	I0916 10:23:38.165727   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key: {Name:mk218c15b5e68b365653a5a88f283b4fd2a63397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.165796   12642 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:23:38.317748   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt ...
	I0916 10:23:38.317782   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt: {Name:mke289e24f4d60c196cc49c14787f9db71cc62b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.317972   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key ...
	I0916 10:23:38.317984   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key: {Name:mk238a3132478eab5de811cbc3626e41ad1154f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.318059   12642 certs.go:256] generating profile certs ...
	I0916 10:23:38.318110   12642 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key
	I0916 10:23:38.318136   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt with IP's: []
	I0916 10:23:38.579861   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt ...
	I0916 10:23:38.579894   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: {Name:mk21e84efd5822ab69a95d39a845706a794c0061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580087   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key ...
	I0916 10:23:38.580102   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.key: {Name:mkafbaeecfaf57db916f1469c60f36a7c0603c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.580202   12642 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e
	I0916 10:23:38.580226   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:38.661523   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e ...
	I0916 10:23:38.661551   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e: {Name:mk3603fd200d1d0c9c664f1f9e2d3f37d0da819e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661721   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e ...
	I0916 10:23:38.661734   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e: {Name:mk979e39754dc7623208af4e4f8346a3268b5e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.661802   12642 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt
	I0916 10:23:38.661872   12642 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key.ea38456e -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key
	I0916 10:23:38.661916   12642 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key
	I0916 10:23:38.661934   12642 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt with IP's: []
	I0916 10:23:38.868848   12642 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt ...
	I0916 10:23:38.868882   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt: {Name:mk60143e6be001872095f4a07cc8800f3883cb9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869061   12642 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key ...
	I0916 10:23:38.869072   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key: {Name:mkfcb902307b78d6d49e6123539922887bdc7bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:38.869254   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:38.869291   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:23:38.869321   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:38.869365   12642 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:23:38.869947   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:38.891875   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:38.913044   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:38.935301   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:38.957638   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:38.978769   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:38.999283   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:39.020509   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:39.041006   12642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:39.062022   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
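	A sketch for inspecting the apiserver certificate generated above (path from the log; the SAN IPs 10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 come from the generation step, plus the usual kubernetes.* DNS names):
	  sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'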
	I0916 10:23:39.077689   12642 ssh_runner.go:195] Run: openssl version
	I0916 10:23:39.082828   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:39.091794   12642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094851   12642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.094909   12642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:39.101357   12642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
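	The two commands above implement OpenSSL's subject-hash lookup: CAs in /etc/ssl/certs are found via a symlink named <hash>.0, so the link name must match the hash of the cert. Reproduced by hand (hash value from this run):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	  openssl verify /usr/share/ca-certificates/minikubeCA.pem                  # should now report: OK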
	I0916 10:23:39.110237   12642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:39.113275   12642 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:39.113343   12642 kubeadm.go:392] StartCluster: {Name:addons-821781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-821781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:39.113424   12642 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:39.113461   12642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:39.147213   12642 cri.go:89] found id: ""
	I0916 10:23:39.147277   12642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:39.155102   12642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:39.162655   12642 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:39.162713   12642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:39.170269   12642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:39.170287   12642 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:39.170331   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:39.177944   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:39.178006   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:39.185617   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:39.193448   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:39.193494   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:39.201778   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.209504   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:39.209560   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:39.217167   12642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:39.224794   12642 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:39.224851   12642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:39.232091   12642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:39.267943   12642 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:39.268041   12642 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:39.285854   12642 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:39.285924   12642 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:39.285968   12642 kubeadm.go:310] OS: Linux
	I0916 10:23:39.286011   12642 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:39.286080   12642 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:39.286143   12642 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:39.286205   12642 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:39.286307   12642 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:39.286389   12642 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:39.286430   12642 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:39.286498   12642 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:39.286566   12642 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:39.334020   12642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:39.334137   12642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:39.334277   12642 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:39.339811   12642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:39.342965   12642 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:39.343081   12642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:39.343174   12642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:39.501471   12642 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:39.656891   12642 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:39.803369   12642 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:39.956554   12642 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:40.122217   12642 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:40.122346   12642 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.178788   12642 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:40.178946   12642 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-821781 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:40.253274   12642 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:40.444072   12642 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:40.539814   12642 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:40.539908   12642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:40.740107   12642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:40.805609   12642 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:41.114974   12642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:41.183175   12642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:41.287722   12642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:41.288131   12642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:41.290675   12642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:41.293432   12642 out.go:235]   - Booting up control plane ...
	I0916 10:23:41.293554   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:41.293636   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:41.293726   12642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:41.302536   12642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:41.307914   12642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:41.307975   12642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:41.387469   12642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:41.387659   12642 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:41.889098   12642 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.704632ms
	I0916 10:23:41.889216   12642 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:46.391264   12642 kubeadm.go:310] [api-check] The API server is healthy after 4.502175176s
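	A sketch of the two health gates just logged; both endpoints can be probed by hand from inside the node (ports from the log; the apiserver's /healthz is readable anonymously in a default kubeadm cluster via the system:public-info-viewer binding):
	  curl -sf  http://127.0.0.1:10248/healthz  && echo kubelet ok
	  curl -skf https://127.0.0.1:8443/healthz  && echo apiserver ok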
	I0916 10:23:46.402989   12642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:46.412298   12642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:46.429664   12642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:46.429953   12642 kubeadm.go:310] [mark-control-plane] Marking the node addons-821781 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:46.439045   12642 kubeadm.go:310] [bootstrap-token] Using token: 08e8kf.82j5psgo1mt86ygt
	I0916 10:23:46.440988   12642 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:46.441118   12642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:46.443591   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:46.448741   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:46.451033   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:46.453482   12642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:46.457052   12642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:46.798062   12642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:47.220263   12642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:47.797780   12642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:47.798623   12642 kubeadm.go:310] 
	I0916 10:23:47.798710   12642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:47.798722   12642 kubeadm.go:310] 
	I0916 10:23:47.798838   12642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:47.798858   12642 kubeadm.go:310] 
	I0916 10:23:47.798897   12642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:47.798955   12642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:47.799030   12642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:47.799050   12642 kubeadm.go:310] 
	I0916 10:23:47.799117   12642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:47.799125   12642 kubeadm.go:310] 
	I0916 10:23:47.799191   12642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:47.799202   12642 kubeadm.go:310] 
	I0916 10:23:47.799273   12642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:47.799371   12642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:47.799433   12642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:47.799458   12642 kubeadm.go:310] 
	I0916 10:23:47.799618   12642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:47.799702   12642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:47.799727   12642 kubeadm.go:310] 
	I0916 10:23:47.799855   12642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800005   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:23:47.800028   12642 kubeadm.go:310] 	--control-plane 
	I0916 10:23:47.800034   12642 kubeadm.go:310] 
	I0916 10:23:47.800137   12642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:47.800147   12642 kubeadm.go:310] 
	I0916 10:23:47.800244   12642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 08e8kf.82j5psgo1mt86ygt \
	I0916 10:23:47.800384   12642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:23:47.802505   12642 kubeadm.go:310] W0916 10:23:39.265300    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.802965   12642 kubeadm.go:310] W0916 10:23:39.265967    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:47.803297   12642 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:47.803488   12642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
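	For reference, the --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's public key; it can be recomputed from this run's ca.crt with the command documented for kubeadm:
	  sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	  # should print f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316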
	I0916 10:23:47.803508   12642 cni.go:84] Creating CNI manager for ""
	I0916 10:23:47.803517   12642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:23:47.805594   12642 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:47.806930   12642 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:47.811723   12642 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:47.811744   12642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:47.829314   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
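	A sketch for verifying the CNI rollout applied above (the kindnet manifest creates a DaemonSet in kube-system; the name "kindnet" is assumed from the manifest, kubectl path and kubeconfig from the log):
	  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset kindnet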
	I0916 10:23:48.045373   12642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:48.045433   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.045434   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-821781 minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-821781 minikube.k8s.io/primary=true
	I0916 10:23:48.053143   12642 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:48.121750   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:48.622580   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.121829   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:49.622144   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.122640   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:50.622473   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.122549   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:51.622693   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.122279   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.622129   12642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.815735   12642 kubeadm.go:1113] duration metric: took 4.770357411s to wait for elevateKubeSystemPrivileges
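	The repeated "get sa default" calls above are a poll: the default ServiceAccount is created asynchronously by the controller-manager, so minikube retries (roughly every 500ms here) until it exists before binding kube-system privileges. An equivalent sketch:
	  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get sa default >/dev/null 2>&1; do
	    sleep 0.5
	  done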
	I0916 10:23:52.815769   12642 kubeadm.go:394] duration metric: took 13.702442151s to StartCluster
	I0916 10:23:52.815790   12642 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.815914   12642 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:23:52.816324   12642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:52.816539   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:52.816545   12642 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:23:52.816616   12642 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:52.816735   12642 addons.go:69] Setting yakd=true in profile "addons-821781"
	I0916 10:23:52.816749   12642 addons.go:69] Setting ingress-dns=true in profile "addons-821781"
	I0916 10:23:52.816756   12642 addons.go:69] Setting default-storageclass=true in profile "addons-821781"
	I0916 10:23:52.816766   12642 addons.go:69] Setting inspektor-gadget=true in profile "addons-821781"
	I0916 10:23:52.816771   12642 addons.go:234] Setting addon ingress-dns=true in "addons-821781"
	I0916 10:23:52.816777   12642 addons.go:234] Setting addon inspektor-gadget=true in "addons-821781"
	I0916 10:23:52.816781   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.816788   12642 addons.go:69] Setting cloud-spanner=true in profile "addons-821781"
	I0916 10:23:52.816798   12642 addons.go:234] Setting addon cloud-spanner=true in "addons-821781"
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816821   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816815   12642 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-821781"
	I0916 10:23:52.816831   12642 addons.go:69] Setting volumesnapshots=true in profile "addons-821781"
	I0916 10:23:52.816846   12642 addons.go:234] Setting addon volumesnapshots=true in "addons-821781"
	I0916 10:23:52.816852   12642 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-821781"
	I0916 10:23:52.816859   12642 addons.go:69] Setting gcp-auth=true in profile "addons-821781"
	I0916 10:23:52.816864   12642 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-821781"
	I0916 10:23:52.816869   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816875   12642 mustload.go:65] Loading cluster: addons-821781
	I0916 10:23:52.816879   12642 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:52.816885   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816897   12642 addons.go:69] Setting ingress=true in profile "addons-821781"
	I0916 10:23:52.816908   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816914   12642 addons.go:234] Setting addon ingress=true in "addons-821781"
	I0916 10:23:52.816821   12642 addons.go:69] Setting storage-provisioner=true in profile "addons-821781"
	I0916 10:23:52.816951   12642 addons.go:234] Setting addon storage-provisioner=true in "addons-821781"
	I0916 10:23:52.816952   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816967   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816991   12642 config.go:182] Loaded profile config "addons-821781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:23:52.817237   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817375   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816847   12642 addons.go:69] Setting helm-tiller=true in profile "addons-821781"
	I0916 10:23:52.817387   12642 addons.go:69] Setting registry=true in profile "addons-821781"
	I0916 10:23:52.817393   12642 addons.go:234] Setting addon helm-tiller=true in "addons-821781"
	I0916 10:23:52.817398   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817399   12642 addons.go:234] Setting addon registry=true in "addons-821781"
	I0916 10:23:52.817413   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817421   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.817453   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817460   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817835   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817839   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.818548   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816758   12642 addons.go:234] Setting addon yakd=true in "addons-821781"
	I0916 10:23:52.818812   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816813   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816831   12642 addons.go:69] Setting metrics-server=true in profile "addons-821781"
	I0916 10:23:52.819624   12642 addons.go:234] Setting addon metrics-server=true in "addons-821781"
	I0916 10:23:52.819661   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.816777   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-821781"
	I0916 10:23:52.820048   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820121   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.820925   12642 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:52.817377   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.823819   12642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:52.819369   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.817378   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816830   12642 addons.go:69] Setting volcano=true in profile "addons-821781"
	I0916 10:23:52.827260   12642 addons.go:234] Setting addon volcano=true in "addons-821781"
	I0916 10:23:52.827341   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.827903   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.816822   12642 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-821781"
	I0916 10:23:52.828667   12642 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-821781"
	I0916 10:23:52.846468   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.849708   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.849779   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.858180   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:52.860117   12642 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:52.861491   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:52.861515   12642 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:52.861580   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.861792   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:52.863536   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:52.865265   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:52.868592   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:52.871812   12642 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:52.873467   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:52.873491   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:52.873553   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.873826   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:52.875500   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:52.876891   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:52.878274   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:52.878295   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:52.878358   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.885380   12642 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:52.887180   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:52.887200   12642 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:52.887253   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.887590   12642 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:52.889278   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:52.889293   12642 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:52.891126   12642 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:52.891146   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:52.891207   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.891375   12642 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:52.893052   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.893213   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:52.893225   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:52.893284   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.895906   12642 addons.go:234] Setting addon default-storageclass=true in "addons-821781"
	I0916 10:23:52.895950   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.896395   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.902602   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:52.904755   12642 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:52.904779   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:52.904841   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.913208   12642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:52.916490   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:52.916516   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:52.916578   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.920102   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
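	A sketch of what the docker-inspect template and ssh-client lines around here establish: 22/tcp inside the node container is published on 127.0.0.1:32768, and that port plus the profile's id_rsa give shell access to the node (all values from the log):
	  PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-821781)
	  ssh -i /home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa \
	      -p "$PORT" docker@127.0.0.1 uname -a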
	I0916 10:23:52.921373   12642 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:52.924287   12642 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:52.924310   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:52.924367   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.924567   12642 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:52.924966   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.927248   12642 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:52.927271   12642 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:52.927324   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	W0916 10:23:52.939182   12642 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 10:23:52.945562   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.947311   12642 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:52.949640   12642 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:52.949813   12642 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:52.949828   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:52.949883   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.950915   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:52.950951   12642 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:52.951010   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.967061   12642 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-821781"
	I0916 10:23:52.967112   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:23:52.967600   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:23:52.976558   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.977128   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979407   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979587   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.979666   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982295   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.982301   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984209   12642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:52.984228   12642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:52.984267   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.984282   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:52.985867   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:52.992433   12642 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:52.996036   12642 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:52.998876   12642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:52.998899   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:52.998966   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:23:53.007398   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.031542   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:23:53.198285   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:53.222232   12642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:53.223607   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:53.303303   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:53.303391   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:53.412003   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:53.494460   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:53.495317   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:53.495388   12642 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:53.500279   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:53.500366   12642 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:53.518431   12642 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:53.518460   12642 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:53.595357   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:53.595389   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:53.595502   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:53.595520   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:53.601235   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:53.601265   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:53.603514   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:53.610819   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:53.613851   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:53.696891   12642 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:53.696920   12642 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:53.697186   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:53.711949   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:53.711981   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:53.793955   12642 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:53.794047   12642 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:53.795627   12642 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:53.795652   12642 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:53.810579   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:53.810623   12642 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:53.818121   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:53.818143   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:54.008884   12642 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:54.008915   12642 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:54.097416   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:54.097502   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:54.105048   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:54.114541   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:54.116113   12642 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.116175   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:54.194093   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:54.194181   12642 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:54.310015   12642 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:54.310107   12642 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:54.315950   12642 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:54.316029   12642 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:54.409828   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:54.595664   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:54.595750   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:54.795049   12642 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:54.795131   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:54.795986   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:54.796042   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:54.798857   12642 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.60047423s)
	I0916 10:23:54.798970   12642 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
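[editor's note] The sed pipeline above rewrites the CoreDNS Corefile held in the kube-system/coredns ConfigMap: it inserts a hosts stanza ahead of the forward directive so that in-cluster lookups of host.minikube.internal resolve to the Docker gateway (192.168.49.1), and inserts a log directive ahead of the errors line. Unescaped, the injected stanza reads:

		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}

	fallthrough hands every other name on to the remaining plugins, so normal cluster DNS is unaffected.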
	I0916 10:23:54.798946   12642 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.576635993s)
	I0916 10:23:54.799977   12642 node_ready.go:35] waiting up to 6m0s for node "addons-821781" to be "Ready" ...
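[editor's note] node_ready.go polls the node object until its Ready condition reports True; the "Ready":"False" lines that recur below are ticks of that poll. Outside the test harness the same check could be expressed with kubectl (a rough equivalent, not what minikube runs):

		kubectl wait --for=condition=Ready node/addons-821781 --timeout=6m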
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:54.816489   12642 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:54.816462   12642 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:54.816544   12642 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:55.096307   12642 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:55.096398   12642 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:55.098163   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:55.303720   12642 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:55.303802   12642 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:55.310866   12642 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:55.310939   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:55.509740   12642 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-821781" context rescaled to 1 replicas
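[editor's note] Rescaling coredns to one replica trims the default two-replica Deployment down for this single-node cluster. kapi.go does this through the API directly; a roughly equivalent command would be:

		kubectl --context addons-821781 -n kube-system scale deployment coredns --replicas=1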
	I0916 10:23:55.603909   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:55.603992   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:55.609116   12642 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:55.609197   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:55.701381   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:56.095470   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:56.095499   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:56.106357   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:56.115945   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.892303376s)
	I0916 10:23:56.209795   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:56.209873   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:56.410426   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:56.410515   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:56.511332   12642 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.511408   12642 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:56.813818   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:56.895029   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:58.497986   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.085861545s)
	I0916 10:23:58.498185   12642 addons.go:475] Verifying addon ingress=true in "addons-821781"
	I0916 10:23:58.498214   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.894594589s)
	I0916 10:23:58.498365   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.801136889s)
	I0916 10:23:58.498429   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.393306067s)
	I0916 10:23:58.498499   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.383877389s)
	I0916 10:23:58.498516   12642 addons.go:475] Verifying addon metrics-server=true in "addons-821781"
	I0916 10:23:58.498551   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.08869279s)
	I0916 10:23:58.498561   12642 addons.go:475] Verifying addon registry=true in "addons-821781"
	I0916 10:23:58.498687   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.40044143s)
	I0916 10:23:58.498148   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.003579441s)
	I0916 10:23:58.498265   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.887343223s)
	I0916 10:23:58.498721   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.884394452s)
	I0916 10:23:58.500166   12642 out.go:177] * Verifying registry addon...
	I0916 10:23:58.500186   12642 out.go:177] * Verifying ingress addon...
	I0916 10:23:58.500168   12642 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-821781 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:58.502840   12642 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:23:58.502984   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0916 10:23:58.505976   12642 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
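[editor's note] The "object has been modified" failure above is the API server's optimistic-concurrency check: the addon read the local-path StorageClass, another writer updated it in the meantime, and the follow-up update carried a stale resourceVersion. A patch sidesteps the read-modify-write race because it submits no resourceVersion; marking the class default by hand would look roughly like this (hypothetical recovery step, not something this log shows being run):

		kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'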
	I0916 10:23:58.508066   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:58.508081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:58.508299   12642 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:23:58.508315   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
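[editor's note] kapi.go's wait loop lists pods by label selector and re-checks roughly every 500ms until each pod reaches Running; that single check repeated is what fills the next several minutes of this log. A hand-rolled equivalent (illustrative only) would be:

		kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
		kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=5m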
	I0916 10:23:59.012329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.110843   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.299182   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.597694462s)
	W0916 10:23:59.299228   12642 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:59.299250   12642 retry.go:31] will retry after 144.288551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
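[editor's note] This is the usual CRD-ordering race: a single kubectl apply submits the VolumeSnapshot CRDs and a VolumeSnapshotClass together, and the class is rejected because discovery has not yet picked up the freshly created CRDs ("ensure CRDs are installed first"). minikube simply retries after a short backoff, which succeeds once the CRDs are established. A sketch of the two-phase alternative (hypothetical; the addon does not do this):

		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
		              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
		              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
		kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml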
	I0916 10:23:59.299277   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.19282086s)
	I0916 10:23:59.305158   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:23:59.444238   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.506924   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:59.507806   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.539307   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.725399907s)
	I0916 10:23:59.539335   12642 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-821781"
	I0916 10:23:59.541718   12642 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:59.543660   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:59.597366   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:59.597452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.006951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.007539   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.096393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.099134   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:00.099205   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.125424   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
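[editor's note] The cli_runner call above extracts the host port Docker mapped to the container's 22/tcp, which is then handed to the SSH client (127.0.0.1:32768 here). The Go template indexes NetworkSettings.Ports; the shorter equivalent is:

		docker port addons-821781 22/tcp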
	I0916 10:24:00.418412   12642 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:00.508361   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:00.509838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.518754   12642 addons.go:234] Setting addon gcp-auth=true in "addons-821781"
	I0916 10:24:00.518809   12642 host.go:66] Checking if "addons-821781" exists ...
	I0916 10:24:00.519365   12642 cli_runner.go:164] Run: docker container inspect addons-821781 --format={{.State.Status}}
	I0916 10:24:00.536851   12642 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:00.536902   12642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821781
	I0916 10:24:00.553493   12642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/addons-821781/id_rsa Username:docker}
	I0916 10:24:00.596428   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.006170   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.006803   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.047121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.506287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:01.506534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.547185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.805560   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:02.007448   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.008038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.046600   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.202834   12642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.758545356s)
	I0916 10:24:02.202854   12642 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.665973141s)
	I0916 10:24:02.205053   12642 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:02.206664   12642 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:02.208283   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:02.208296   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:02.226305   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:02.226333   12642 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:02.244167   12642 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.244187   12642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:02.298853   12642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:02.506489   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:02.506968   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.547297   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.899621   12642 addons.go:475] Verifying addon gcp-auth=true in "addons-821781"
	I0916 10:24:02.901591   12642 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:02.904224   12642 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:02.907029   12642 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:02.907051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.007207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.007880   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.047134   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.407111   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.506509   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:03.507075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.547522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.907027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.007265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.007643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.046594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.303245   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:04.407879   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.506365   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:04.506939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.547412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.907817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.006397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.007232   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.047038   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.407918   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.506892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:05.507154   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.547266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.907671   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.006358   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.006625   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.046717   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.407766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.506364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:06.506750   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.547000   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.803631   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:06.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.006037   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.006551   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.046971   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.407314   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.506338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.547256   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.907021   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.005785   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.006334   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.046439   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.408357   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.505952   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.506643   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.547247   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.803661   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:08.907343   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.006189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.006703   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.046966   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.407657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.506182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.506608   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.546942   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.907283   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.005977   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.006337   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.046685   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.408104   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.507241   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.547393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.907115   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.005778   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.006115   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.047296   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.302797   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:11.407398   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.506075   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.506794   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.546885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.907330   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.006053   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.006567   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.046997   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.407912   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.506528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.507006   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.547228   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.907413   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.006062   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.006437   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.046726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.303472   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:13.407845   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.506423   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.506765   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.547162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.907106   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.005737   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.006410   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.047326   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.407189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.505915   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.506316   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.547399   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.907535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.006393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.007080   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.046972   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.407693   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.506219   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.506709   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.547052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.803455   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:15.907823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.006647   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.007106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.047456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.407960   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.506331   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.506765   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.547157   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.907551   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.006299   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.006617   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.047040   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.406899   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.506449   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.506938   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.547210   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.907861   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.006488   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.006990   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.046795   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.303390   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:18.408194   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.505660   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.506075   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.547467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.908947   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.006658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.007120   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.047574   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.407694   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.506237   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.506764   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.546743   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.907775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.006250   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.006926   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.046950   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.407914   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.506444   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.506893   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.547165   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.802891   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:20.908266   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.006168   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.006661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.046763   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.407620   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.506280   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.506758   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.547207   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.907808   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.006390   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.006832   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.047258   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.407294   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.506192   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.506573   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.546892   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.803612   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:22.907631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.006412   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.006789   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.047499   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.407703   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.506242   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.506922   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.546531   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.907989   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.006557   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.007064   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.047256   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.407245   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.506027   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.506326   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.546265   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.907143   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.006149   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.006574   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.046726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.303085   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:25.407800   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.506502   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.506958   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.549041   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.907130   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.005689   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.006094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.047573   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.407949   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.506465   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.506873   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.547130   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.907930   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.006498   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.006899   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.047132   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.303541   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:27.407076   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.505560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.506083   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.547418   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.907322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.006007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.006289   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.046769   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.506106   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.506493   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.547121   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.907052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.005692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.006125   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.047636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.407566   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.506440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.506780   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.547158   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.802646   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:29.907185   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.005875   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.006320   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.046391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.407344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.505998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.506431   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.546833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.006755   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.007344   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.047565   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.407650   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.506485   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.506906   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.547281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.803334   12642 node_ready.go:53] node "addons-821781" has status "Ready":"False"
	I0916 10:24:31.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.006411   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.006716   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.047171   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.407108   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.505792   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.506357   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.547493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.907787   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.006393   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.007161   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.047511   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.407346   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.506125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.506509   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.547645   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.803187   12642 node_ready.go:49] node "addons-821781" has status "Ready":"True"
	I0916 10:24:33.803213   12642 node_ready.go:38] duration metric: took 39.003174602s for node "addons-821781" to be "Ready" ...
	I0916 10:24:33.803225   12642 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:24:33.970599   12642 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
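
The log above and below interleaves two polling patterns from the minikube test harness: pod_ready.go waits on a single named pod's Ready condition (the `has status "Ready":"True"` lines), while kapi.go:96 repeatedly checks every pod matching an addon's label selector (the `waiting for pod ...` lines, emitted roughly every 500ms). The Go sketch below is illustrative only and is not minikube's actual kapi.go or pod_ready.go code; it assumes a standard client-go clientset, and the kube-system namespace, registry selector, and 500ms interval are borrowed from the log purely for illustration.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True -- the
    // same check the pod_ready.go lines log as `has status "Ready":"True"`.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForSelector polls all pods matching a label selector until every
    // one is ready, mirroring the shape of the kapi.go:96 wait loop above.
    func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                ready := 0
                for i := range pods.Items {
                    if podReady(&pods.Items[i]) {
                        ready++
                    }
                }
                if ready == len(pods.Items) {
                    return nil
                }
            }
            fmt.Printf("waiting for pod %q\n", selector)
            time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
        }
        return fmt.Errorf("timed out waiting for %q", selector)
    }

    func main() {
        // Assumes a reachable cluster via the default kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Selector and timeout taken from the log for illustration; the
        // namespace is an assumption, not read from the report.
        if err := waitForSelector(context.Background(), cs,
            "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

The real harness also applies the per-pod timeout separately for each system-critical component (coredns, etcd, kube-apiserver, and so on, as the pod_ready.go lines that follow show), rather than one global deadline as in this sketch.
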
	I0916 10:24:34.069001   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.088106   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.088355   12642 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:34.088380   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.088736   12642 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:34.088757   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.407852   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.508926   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.509671   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.609806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.907890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.006456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.006807   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.047745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.407857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.476382   12642 pod_ready.go:93] pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.476406   12642 pod_ready.go:82] duration metric: took 1.50577246s for pod "coredns-7c65d6cfc9-f6b44" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.476429   12642 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480336   12642 pod_ready.go:93] pod "etcd-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.480359   12642 pod_ready.go:82] duration metric: took 3.921757ms for pod "etcd-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.480374   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484379   12642 pod_ready.go:93] pod "kube-apiserver-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.484399   12642 pod_ready.go:82] duration metric: took 4.01835ms for pod "kube-apiserver-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.484407   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488483   12642 pod_ready.go:93] pod "kube-controller-manager-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.488502   12642 pod_ready.go:82] duration metric: took 4.089026ms for pod "kube-controller-manager-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.488513   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492259   12642 pod_ready.go:93] pod "kube-proxy-7grrw" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.492277   12642 pod_ready.go:82] duration metric: took 3.758267ms for pod "kube-proxy-7grrw" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.492286   12642 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.508978   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.509276   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.548257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.875363   12642 pod_ready.go:93] pod "kube-scheduler-addons-821781" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:35.875387   12642 pod_ready.go:82] duration metric: took 383.093988ms for pod "kube-scheduler-addons-821781" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.875399   12642 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:35.907718   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.006857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.007094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.047708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.407759   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.506231   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.506532   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.547623   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.908178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.009196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.009613   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.111822   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.408212   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.507815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.508955   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.597930   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.899332   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.907966   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.007593   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.007941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.096688   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.407803   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.507008   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.507185   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.548820   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.912820   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.007788   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.007812   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.048263   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.407800   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.506945   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.507715   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.548866   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.908787   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.007032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.007632   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.048796   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.398719   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:40.407487   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.507397   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.507772   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.548227   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.908344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.009557   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.009817   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.048882   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.407443   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.507386   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.507614   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.547783   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.907344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.006438   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.006755   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.047817   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.407604   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.506506   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.506862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.548258   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.880576   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:42.907125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.006570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.006955   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.048271   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.407864   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.507257   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.507492   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.548688   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.907268   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.006139   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.006358   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.048808   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.408058   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.506983   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.507322   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.548244   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.907777   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.007224   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.007575   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.048360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.381456   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:45.408061   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.507492   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.507642   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.548176   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.907279   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.006236   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.006567   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.047499   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.407829   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.507175   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.507613   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.549215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.908356   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.007293   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.007559   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.098016   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.398953   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:47.408142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.507848   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.508575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.597783   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.907504   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.006545   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.007094   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.047872   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.408467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.506796   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.507040   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.548302   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.907911   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.007377   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.007799   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.048150   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.407649   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.506584   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.507145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.548392   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.881772   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:49.907684   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.006877   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.007616   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.048576   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.408384   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.509092   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.509234   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.548191   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.907565   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.008280   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.008548   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.048447   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.407510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.506404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.547570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.900427   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:51.908013   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.008311   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.009178   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.098159   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.407616   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.506895   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.507402   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.548326   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.907362   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.008415   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.009033   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.110477   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.408669   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.508937   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.509320   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.548259   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.907440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.006459   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.006703   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.047766   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.381253   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:54.408025   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.506984   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.548500   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.907545   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.007055   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.007267   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.048307   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.407381   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.506329   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.506924   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.547861   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.907031   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.007920   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.048290   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.407755   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.508288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.508534   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.547447   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.880835   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:56.907604   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.008980   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.009246   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.048404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.408337   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.506591   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.506714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.547844   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.907931   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.007018   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.007364   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.048745   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.407890   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.506768   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.507350   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.548030   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.883327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:58.908144   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.008937   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.010047   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.048751   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.407088   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.507067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.507939   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.597408   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.907493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.006520   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.006934   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.407658   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.506801   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.507503   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.548304   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.908137   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.007637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.007838   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.048049   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.381960   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:01.407780   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.506951   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.507128   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.549865   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.908484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.009640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.009714   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.047344   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.407125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.506639   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.547791   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.908024   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.007189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.007861   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.048215   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.408697   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.509655   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.509879   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.547998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.881604   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:03.907142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.006400   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.006547   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.047579   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.407594   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.509746   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.510002   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.547819   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.907345   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.006657   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.006921   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.048328   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.407535   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.506637   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.506876   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.548360   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.881794   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:05.907547   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.006578   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.007101   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.047920   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.408051   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.506012   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.506238   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.548610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.907726   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.006786   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.007057   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.048484   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.407806   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:07.506692   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.506986   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.548007   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.907772   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.006701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.006970   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.047834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.394559   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:08.408017   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:08.507156   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.507728   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.597758   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.907919   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.007475   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.007661   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.098454   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.408318   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:09.509364   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.510773   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.598483   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.908201   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.008441   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.009850   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.102292   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.398327   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:10.408466   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:10.507500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.507925   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.548323   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.907708   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.006815   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.008091   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.047722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.407736   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:11.507196   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.507427   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.599680   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.907752   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.007430   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.007699   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.047776   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.407516   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:12.506452   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.506628   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.550195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.880927   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:12.907727   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.007178   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.007457   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.407946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:13.507322   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.507501   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.547784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.908011   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.007871   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.008085   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.049162   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.407342   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:14.506366   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.507489   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.597388   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.881914   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:14.907833   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.007276   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.008484   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.097577   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.407927   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:15.507867   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.508145   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.548701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.909823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.012269   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.012490   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.112080   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.407823   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:16.506640   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.507038   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.547677   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.908338   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.006229   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.006500   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.047433   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.380841   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:17.408141   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:17.507281   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.507422   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.548306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.908216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.005946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.006253   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.048471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.407630   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:18.506857   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.507586   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.547722   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.908142   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.007287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.007657   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.048873   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.399218   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:19.408522   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:19.506838   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.506974   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.548754   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.907508   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.006666   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.007738   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.096885   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.407683   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:20.507079   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.507594   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.549277   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.938821   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.007125   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.007361   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.049052   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.408461   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:21.506721   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.507045   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.548148   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.881149   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:21.907701   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.007091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.007530   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.108828   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.408067   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:22.507251   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.507505   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.549744   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.908512   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.006557   12642 kapi.go:107] duration metric: took 1m24.503572468s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:23.007211   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.050575   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.408216   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:23.507222   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.548029   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.881704   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:23.907636   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.006951   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.048091   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.407560   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:24.506856   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.548705   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.907750   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.006941   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.048097   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.408473   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:25.507086   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.548651   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.907834   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.007469   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.048617   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.415775   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:26.417875   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:26.507746   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.549493   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.908404   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.009635   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.048391   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.408105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:27.509068   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.548222   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.908042   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.007883   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.047932   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.408370   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:28.507379   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.548467   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.898654   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:28.907039   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.007310   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.048105   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.407790   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:29.507440   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.598195   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.907810   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.007961   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.047756   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.407748   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:30.507308   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.548456   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.908206   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.007623   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.048306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.380691   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:31.407719   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:31.506896   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.547878   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.907840   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.007212   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.048133   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.407238   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:32.506798   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.548528   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.907455   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.006747   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.047570   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.381514   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:33.408306   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:33.506478   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.548374   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.907944   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.007347   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.048784   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.408200   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:34.506244   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.548189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.907539   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.006862   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.049282   12642 kapi.go:107] duration metric: took 1m35.505619997s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:25:35.407599   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:35.506942   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.881121   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:35.907998   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.007303   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.407476   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:36.506940   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.907288   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.006647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.408081   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:37.507464   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.908184   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.007201   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.381474   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:38.407986   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:38.508647   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.908946   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.008435   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.408471   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:39.510473   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.995610   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.008869   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.397632   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:40.408032   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:40.509659   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.907933   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.007031   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.408056   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:41.508041   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.908287   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.006885   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.407440   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:42.506800   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.880849   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:42.907379   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.008348   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.408661   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:43.506952   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.907189   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.006692   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.407965   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:44.507074   12642 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.908416   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.006411   12642 kapi.go:107] duration metric: took 1m46.503572843s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:45.381179   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:45.459019   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:45.907457   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.408510   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:46.907182   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.396594   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:47.407631   12642 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:47.908030   12642 kapi.go:107] duration metric: took 1m45.003803312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:47.909696   12642 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-821781 cluster.
	I0916 10:25:47.911374   12642 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:47.913470   12642 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:47.915138   12642 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, helm-tiller, metrics-server, storage-provisioner, cloud-spanner, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:47.916678   12642 addons.go:510] duration metric: took 1m55.100061322s for enable addons: enabled=[ingress-dns nvidia-device-plugin helm-tiller metrics-server storage-provisioner cloud-spanner yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 10:25:49.881225   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:52.381442   12642 pod_ready.go:103] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:25:54.380287   12642 pod_ready.go:93] pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.380308   12642 pod_ready.go:82] duration metric: took 1m18.504902601s for pod "metrics-server-84c5f94fbc-t6sfx" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.380318   12642 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384430   12642 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace has status "Ready":"True"
	I0916 10:25:54.384450   12642 pod_ready.go:82] duration metric: took 4.126025ms for pod "nvidia-device-plugin-daemonset-fs477" in "kube-system" namespace to be "Ready" ...
	I0916 10:25:54.384468   12642 pod_ready.go:39] duration metric: took 1m20.581229133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:25:54.384485   12642 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:25:54.384513   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:54.384564   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:54.417384   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.417411   12642 cri.go:89] found id: ""
	I0916 10:25:54.417421   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:54.417476   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.420785   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:54.420839   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:54.452868   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.452890   12642 cri.go:89] found id: ""
	I0916 10:25:54.452898   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:54.452950   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.456066   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:54.456119   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:54.487907   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:54.487930   12642 cri.go:89] found id: ""
	I0916 10:25:54.487938   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:54.487992   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.491215   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:54.491266   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:54.523745   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.523766   12642 cri.go:89] found id: ""
	I0916 10:25:54.523775   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:54.523831   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.527161   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:54.527229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:54.560095   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.560123   12642 cri.go:89] found id: ""
	I0916 10:25:54.560133   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:54.560180   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.563529   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:54.563589   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:54.596576   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:54.596600   12642 cri.go:89] found id: ""
	I0916 10:25:54.596608   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:54.596655   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.599825   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:54.599906   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:54.632507   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:54.632531   12642 cri.go:89] found id: ""
	I0916 10:25:54.632539   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:54.632620   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:54.635882   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:54.635906   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:54.698451   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:54.698492   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:54.799766   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:54.799797   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:54.843933   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:54.843963   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:54.894142   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:54.894174   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:54.934257   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:54.934288   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:54.967135   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:54.967163   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:55.001104   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:55.001133   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:55.013631   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:55.013663   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:55.047469   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:55.047499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:55.106750   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:55.106787   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:55.182277   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:55.182324   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:25:57.726595   12642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:25:57.740119   12642 api_server.go:72] duration metric: took 2m4.923540882s to wait for apiserver process to appear ...
	I0916 10:25:57.740152   12642 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:25:57.740187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:25:57.740229   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:25:57.772533   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:57.772558   12642 cri.go:89] found id: ""
	I0916 10:25:57.772566   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:25:57.772615   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.775778   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:25:57.775838   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:25:57.813245   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:57.813271   12642 cri.go:89] found id: ""
	I0916 10:25:57.813281   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:25:57.813354   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.817691   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:25:57.817769   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:25:57.851306   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:57.851328   12642 cri.go:89] found id: ""
	I0916 10:25:57.851335   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:25:57.851378   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.854640   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:25:57.854706   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:25:57.904175   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:57.904198   12642 cri.go:89] found id: ""
	I0916 10:25:57.904205   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:25:57.904252   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.907938   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:25:57.907996   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:25:57.941402   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:57.941421   12642 cri.go:89] found id: ""
	I0916 10:25:57.941428   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:25:57.941481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.944741   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:25:57.944796   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:25:57.979020   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:57.979042   12642 cri.go:89] found id: ""
	I0916 10:25:57.979051   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:25:57.979108   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:57.982381   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:25:57.982431   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:25:58.014858   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:25:58.014881   12642 cri.go:89] found id: ""
	I0916 10:25:58.014890   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:25:58.014937   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:25:58.018251   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:25:58.018272   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:25:58.050812   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:25:58.050847   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:25:58.108286   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:25:58.108318   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:25:58.182964   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:25:58.183002   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:25:58.248089   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:25:58.248126   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:25:58.260293   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:25:58.260339   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:25:58.355509   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:25:58.355535   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:25:58.398314   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:25:58.398350   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:25:58.445703   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:25:58.445736   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:25:58.485997   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:25:58.486025   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:25:58.519971   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:25:58.519998   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:25:58.558470   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:25:58.558499   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.092930   12642 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:26:01.096706   12642 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:26:01.097615   12642 api_server.go:141] control plane version: v1.31.1
	I0916 10:26:01.097635   12642 api_server.go:131] duration metric: took 3.357476241s to wait for apiserver health ...
	I0916 10:26:01.097642   12642 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:26:01.097662   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:26:01.097709   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:26:01.131450   12642 cri.go:89] found id: "f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.131477   12642 cri.go:89] found id: ""
	I0916 10:26:01.131489   12642 logs.go:276] 1 containers: [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7]
	I0916 10:26:01.131542   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.134752   12642 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:26:01.134813   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:26:01.166978   12642 cri.go:89] found id: "aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.167002   12642 cri.go:89] found id: ""
	I0916 10:26:01.167014   12642 logs.go:276] 1 containers: [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e]
	I0916 10:26:01.167057   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.170770   12642 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:26:01.170821   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:26:01.203544   12642 cri.go:89] found id: "5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.203564   12642 cri.go:89] found id: ""
	I0916 10:26:01.203571   12642 logs.go:276] 1 containers: [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8]
	I0916 10:26:01.203632   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.207027   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:26:01.207101   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:26:01.240766   12642 cri.go:89] found id: "23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.240787   12642 cri.go:89] found id: ""
	I0916 10:26:01.240795   12642 logs.go:276] 1 containers: [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316]
	I0916 10:26:01.240847   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.244187   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:26:01.244242   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:26:01.278657   12642 cri.go:89] found id: "8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.278686   12642 cri.go:89] found id: ""
	I0916 10:26:01.278696   12642 logs.go:276] 1 containers: [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6]
	I0916 10:26:01.278754   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.282264   12642 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:26:01.282333   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:26:01.316408   12642 cri.go:89] found id: "319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.316431   12642 cri.go:89] found id: ""
	I0916 10:26:01.316439   12642 logs.go:276] 1 containers: [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca]
	I0916 10:26:01.316481   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.319848   12642 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:26:01.319913   12642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:26:01.352617   12642 cri.go:89] found id: "e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.352637   12642 cri.go:89] found id: ""
	I0916 10:26:01.352645   12642 logs.go:276] 1 containers: [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101]
	I0916 10:26:01.352692   12642 ssh_runner.go:195] Run: which crictl
	I0916 10:26:01.356052   12642 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:26:01.356078   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:26:01.430171   12642 logs.go:123] Gathering logs for container status ...
	I0916 10:26:01.430203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:26:01.471970   12642 logs.go:123] Gathering logs for kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] ...
	I0916 10:26:01.472001   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316"
	I0916 10:26:01.512405   12642 logs.go:123] Gathering logs for kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] ...
	I0916 10:26:01.512437   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6"
	I0916 10:26:01.545482   12642 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:26:01.545511   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:26:01.657458   12642 logs.go:123] Gathering logs for kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] ...
	I0916 10:26:01.657495   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7"
	I0916 10:26:01.703167   12642 logs.go:123] Gathering logs for etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] ...
	I0916 10:26:01.703203   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e"
	I0916 10:26:01.753488   12642 logs.go:123] Gathering logs for coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] ...
	I0916 10:26:01.753528   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8"
	I0916 10:26:01.788778   12642 logs.go:123] Gathering logs for kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] ...
	I0916 10:26:01.788809   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca"
	I0916 10:26:01.847216   12642 logs.go:123] Gathering logs for kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] ...
	I0916 10:26:01.847252   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101"
	I0916 10:26:01.883444   12642 logs.go:123] Gathering logs for kubelet ...
	I0916 10:26:01.883479   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:26:01.950602   12642 logs.go:123] Gathering logs for dmesg ...
	I0916 10:26:01.950637   12642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:26:04.473621   12642 system_pods.go:59] 19 kube-system pods found
	I0916 10:26:04.473667   12642 system_pods.go:61] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.473674   12642 system_pods.go:61] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.473678   12642 system_pods.go:61] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.473681   12642 system_pods.go:61] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.473685   12642 system_pods.go:61] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.473688   12642 system_pods.go:61] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.473692   12642 system_pods.go:61] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.473696   12642 system_pods.go:61] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.473699   12642 system_pods.go:61] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.473702   12642 system_pods.go:61] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.473706   12642 system_pods.go:61] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.473709   12642 system_pods.go:61] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.473712   12642 system_pods.go:61] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.473715   12642 system_pods.go:61] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.473718   12642 system_pods.go:61] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.473722   12642 system_pods.go:61] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.473725   12642 system_pods.go:61] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.473728   12642 system_pods.go:61] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.473731   12642 system_pods.go:61] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.473737   12642 system_pods.go:74] duration metric: took 3.376089349s to wait for pod list to return data ...
	I0916 10:26:04.473747   12642 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:26:04.476243   12642 default_sa.go:45] found service account: "default"
	I0916 10:26:04.476265   12642 default_sa.go:55] duration metric: took 2.512507ms for default service account to be created ...
	I0916 10:26:04.476273   12642 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:26:04.484719   12642 system_pods.go:86] 19 kube-system pods found
	I0916 10:26:04.484756   12642 system_pods.go:89] "coredns-7c65d6cfc9-f6b44" [486d40ce-7ea8-4bbb-a858-d8c7dabcd8de] Running
	I0916 10:26:04.484762   12642 system_pods.go:89] "csi-hostpath-attacher-0" [05466a38-d5d0-4850-a6ee-05a0a811e7e3] Running
	I0916 10:26:04.484766   12642 system_pods.go:89] "csi-hostpath-resizer-0" [3c7e8ccf-9d96-48c9-9ce8-67cff96124bf] Running
	I0916 10:26:04.484770   12642 system_pods.go:89] "csi-hostpathplugin-pwtwp" [b2e904a0-1c8b-4229-a3f2-1de5b69d5c5a] Running
	I0916 10:26:04.484774   12642 system_pods.go:89] "etcd-addons-821781" [aa22e2f6-be68-4f6e-87fe-c60b1829e2f0] Running
	I0916 10:26:04.484778   12642 system_pods.go:89] "kindnet-2bwl4" [50685297-f317-40a6-bcd6-5892df8b9a1d] Running
	I0916 10:26:04.484782   12642 system_pods.go:89] "kube-apiserver-addons-821781" [497d7ac8-f99e-436a-a98b-deaf656fda24] Running
	I0916 10:26:04.484786   12642 system_pods.go:89] "kube-controller-manager-addons-821781" [d9f0daad-0ea9-4dd7-a176-0f010b96bae4] Running
	I0916 10:26:04.484790   12642 system_pods.go:89] "kube-ingress-dns-minikube" [94151fd8-76ae-45b4-82dc-e1717717bd78] Running
	I0916 10:26:04.484796   12642 system_pods.go:89] "kube-proxy-7grrw" [1f2a18f6-a131-4878-8520-707c1e72b33c] Running
	I0916 10:26:04.484800   12642 system_pods.go:89] "kube-scheduler-addons-821781" [6764ba7d-4081-4740-b64d-ab998d7e694b] Running
	I0916 10:26:04.484803   12642 system_pods.go:89] "metrics-server-84c5f94fbc-t6sfx" [82f2a6b8-aafa-4f82-a707-d4bdaedd415d] Running
	I0916 10:26:04.484807   12642 system_pods.go:89] "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
	I0916 10:26:04.484812   12642 system_pods.go:89] "registry-66c9cd494c-48kvj" [36c41e69-8354-4fce-98a3-99b23a9ab570] Running
	I0916 10:26:04.484818   12642 system_pods.go:89] "registry-proxy-hbwdk" [44cd3bc9-5996-4fb6-b54d-fe98c6c50a75] Running
	I0916 10:26:04.484822   12642 system_pods.go:89] "snapshot-controller-56fcc65765-b752p" [bef8c9e1-c757-4d0a-a60a-c1273a1fc66b] Running
	I0916 10:26:04.484826   12642 system_pods.go:89] "snapshot-controller-56fcc65765-tdxm7" [759c672b-f4bc-4223-ac65-ac1287624e79] Running
	I0916 10:26:04.484830   12642 system_pods.go:89] "storage-provisioner" [87ba07d9-0493-4c14-a34b-5d3a24e24a15] Running
	I0916 10:26:04.484834   12642 system_pods.go:89] "tiller-deploy-b48cc5f79-jcsqv" [3177a86a-dac6-4f73-acef-e8b6f8c0aed1] Running
	I0916 10:26:04.484840   12642 system_pods.go:126] duration metric: took 8.563189ms to wait for k8s-apps to be running ...
	I0916 10:26:04.484851   12642 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:26:04.484897   12642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:26:04.496212   12642 system_svc.go:56] duration metric: took 11.351945ms WaitForService to wait for kubelet
	I0916 10:26:04.496239   12642 kubeadm.go:582] duration metric: took 2m11.67966753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:26:04.496261   12642 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:26:04.499350   12642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:26:04.499377   12642 node_conditions.go:123] node cpu capacity is 8
	I0916 10:26:04.499389   12642 node_conditions.go:105] duration metric: took 3.122952ms to run NodePressure ...
	I0916 10:26:04.499400   12642 start.go:241] waiting for startup goroutines ...
	I0916 10:26:04.499406   12642 start.go:246] waiting for cluster config update ...
	I0916 10:26:04.499455   12642 start.go:255] writing updated cluster config ...
	I0916 10:26:04.519561   12642 ssh_runner.go:195] Run: rm -f paused
	I0916 10:26:04.665202   12642 out.go:177] * Done! kubectl is now configured to use "addons-821781" cluster and "default" namespace by default
	E0916 10:26:04.666644   12642 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
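
The "exec format error" above is ENOEXEC from the kernel: /usr/local/bin/kubectl is not a binary this host can execute (wrong architecture, or not a valid ELF file at all). A minimal Go sketch for checking that, assuming a Linux host; the helper is a hypothetical diagnostic, not part of minikube or the test suite:

```go
// checkarch.go - print the ELF class and machine of a binary, to help
// diagnose "fork/exec ...: exec format error" (ENOEXEC). Hypothetical
// diagnostic helper, not part of minikube or the test suite.
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	path := "/usr/local/bin/kubectl" // the binary the failing tests exec
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := elf.Open(path)
	if err != nil {
		// A non-ELF file (a truncated download, a Mach-O binary, an
		// HTML error page saved to disk) also yields ENOEXEC on Linux.
		fmt.Fprintf(os.Stderr, "not a readable ELF file: %v\n", err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Printf("file:  %s\n", path)
	fmt.Printf("class: %v, machine: %v\n", f.Class, f.Machine)
	fmt.Printf("host:  %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```

On this amd64 node the expected values are ELFCLASS64 / EM_X86_64; anything else would be consistent with the kubectl-driven failures recorded in this run.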
	
	
	==> CRI-O <==
	Sep 16 10:27:28 addons-821781 crio[1028]: time="2024-09-16 10:27:28.002745748Z" level=info msg="Removed container 960e66cd3823f16f4a22eb9ac13bfa9f841ffe00738d7d8dd8b1aa8358772c0f: kube-system/tiller-deploy-b48cc5f79-jcsqv/tiller" id=20c33ef3-37f7-4f43-97d7-23b173848fd1 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.198286631Z" level=info msg="Stopping pod sandbox: 300e5b8a22c3edbb8b2b84410c6e22ea3bb4d309590d099249c250241dd694ed" id=3c0fdf6d-b3ae-4175-9d87-3618e8f4f71c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.198348502Z" level=info msg="Stopped pod sandbox (already stopped): 300e5b8a22c3edbb8b2b84410c6e22ea3bb4d309590d099249c250241dd694ed" id=3c0fdf6d-b3ae-4175-9d87-3618e8f4f71c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.198642882Z" level=info msg="Removing pod sandbox: 300e5b8a22c3edbb8b2b84410c6e22ea3bb4d309590d099249c250241dd694ed" id=23484731-db4a-4bfc-b932-8b78b207f3c5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.205894535Z" level=info msg="Removed pod sandbox: 300e5b8a22c3edbb8b2b84410c6e22ea3bb4d309590d099249c250241dd694ed" id=23484731-db4a-4bfc-b932-8b78b207f3c5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.206314240Z" level=info msg="Stopping pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=00c60787-d056-4d82-a5ba-1ba34f5aae8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.206350928Z" level=info msg="Stopped pod sandbox (already stopped): a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=00c60787-d056-4d82-a5ba-1ba34f5aae8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.206580298Z" level=info msg="Removing pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=9f9aa37c-fc7b-4d1a-af38-b4061811956f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.213389226Z" level=info msg="Removed pod sandbox: a1c04690939ae20ba7c3a3056ddf0160aa28699779393f2006bda294b833ca9a" id=9f9aa37c-fc7b-4d1a-af38-b4061811956f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.213824980Z" level=info msg="Stopping pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=ed2f42f9-aa3a-40c9-8df8-926cdbd385ca name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.213871523Z" level=info msg="Stopped pod sandbox (already stopped): 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=ed2f42f9-aa3a-40c9-8df8-926cdbd385ca name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.214166373Z" level=info msg="Removing pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=2da8c7e3-71a8-4291-bcf0-102db2d873de name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:27:47 addons-821781 crio[1028]: time="2024-09-16 10:27:47.221101717Z" level=info msg="Removed pod sandbox: 5f0be722b34e2960b568427815c79c725c4b3d6a5ca241d24030aba38a8707fc" id=2da8c7e3-71a8-4291-bcf0-102db2d873de name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:31:28 addons-821781 crio[1028]: time="2024-09-16 10:31:28.157520629Z" level=info msg="Stopping container: 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302 (timeout: 30s)" id=7de2b76d-f0b1-40f7-87e6-4a8075cdda9b name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:31:29 addons-821781 crio[1028]: time="2024-09-16 10:31:29.297752827Z" level=info msg="Stopped container 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302: kube-system/metrics-server-84c5f94fbc-t6sfx/metrics-server" id=7de2b76d-f0b1-40f7-87e6-4a8075cdda9b name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:31:29 addons-821781 crio[1028]: time="2024-09-16 10:31:29.298299552Z" level=info msg="Stopping pod sandbox: a92ded8c2c84eb598ebf0cf48eef392c2b180d545927d5478799800dfd280086" id=1304cb2d-d045-40be-b67a-62ca088cc0eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:31:29 addons-821781 crio[1028]: time="2024-09-16 10:31:29.298539348Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-t6sfx Namespace:kube-system ID:a92ded8c2c84eb598ebf0cf48eef392c2b180d545927d5478799800dfd280086 UID:82f2a6b8-aafa-4f82-a707-d4bdaedd415d NetNS:/var/run/netns/ef0ae5d5-88f1-4ca4-b6b0-7cf3241576cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:31:29 addons-821781 crio[1028]: time="2024-09-16 10:31:29.298718318Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-t6sfx from CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:31:29 addons-821781 crio[1028]: time="2024-09-16 10:31:29.330816965Z" level=info msg="Stopped pod sandbox: a92ded8c2c84eb598ebf0cf48eef392c2b180d545927d5478799800dfd280086" id=1304cb2d-d045-40be-b67a-62ca088cc0eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:31:29 addons-821781 crio[1028]: time="2024-09-16 10:31:29.516573382Z" level=info msg="Removing container: 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302" id=d522c000-0c58-4321-9d75-3f9a6a8ab2a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:31:29 addons-821781 crio[1028]: time="2024-09-16 10:31:29.533073837Z" level=info msg="Removed container 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302: kube-system/metrics-server-84c5f94fbc-t6sfx/metrics-server" id=d522c000-0c58-4321-9d75-3f9a6a8ab2a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:31:47 addons-821781 crio[1028]: time="2024-09-16 10:31:47.234924979Z" level=info msg="Stopping pod sandbox: a92ded8c2c84eb598ebf0cf48eef392c2b180d545927d5478799800dfd280086" id=39961741-665a-47e8-a24d-ecf2ba8d8686 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:31:47 addons-821781 crio[1028]: time="2024-09-16 10:31:47.234964535Z" level=info msg="Stopped pod sandbox (already stopped): a92ded8c2c84eb598ebf0cf48eef392c2b180d545927d5478799800dfd280086" id=39961741-665a-47e8-a24d-ecf2ba8d8686 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:31:47 addons-821781 crio[1028]: time="2024-09-16 10:31:47.235238894Z" level=info msg="Removing pod sandbox: a92ded8c2c84eb598ebf0cf48eef392c2b180d545927d5478799800dfd280086" id=24be5bec-18e3-46bc-9f11-8f7aa20c914f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:31:47 addons-821781 crio[1028]: time="2024-09-16 10:31:47.241905395Z" level=info msg="Removed pod sandbox: a92ded8c2c84eb598ebf0cf48eef392c2b180d545927d5478799800dfd280086" id=24be5bec-18e3-46bc-9f11-8f7aa20c914f name=/runtime.v1.RuntimeService/RemovePodSandbox
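
The stop/remove pairs above follow the CRI sandbox teardown contract: StopPodSandbox is idempotent (the "already stopped" replies are successes), and RemovePodSandbox runs only afterwards. A hypothetical sketch of that calling pattern, with an in-memory client standing in for the real gRPC runtime.v1.RuntimeService:

```go
// teardown.go - the stop-then-remove pattern in the CRI-O log above:
// StopPodSandbox is idempotent ("already stopped" is a success reply)
// and RemovePodSandbox follows. The client here is a hypothetical
// in-memory stand-in; the real API is gRPC (runtime.v1.RuntimeService).
package main

import "fmt"

type runtimeService interface {
	StopPodSandbox(id string) error // must succeed if already stopped
	RemovePodSandbox(id string) error
}

type fakeCRI struct{ stopped map[string]bool }

func (f *fakeCRI) StopPodSandbox(id string) error {
	if f.stopped[id] {
		fmt.Println("Stopped pod sandbox (already stopped):", id)
		return nil // idempotent, as in the log above
	}
	f.stopped[id] = true
	fmt.Println("Stopped pod sandbox:", id)
	return nil
}

func (f *fakeCRI) RemovePodSandbox(id string) error {
	delete(f.stopped, id)
	fmt.Println("Removed pod sandbox:", id)
	return nil
}

// teardown mirrors the logged sequence: stop first (safe to repeat),
// then remove.
func teardown(r runtimeService, id string) error {
	if err := r.StopPodSandbox(id); err != nil {
		return err
	}
	return r.RemovePodSandbox(id)
}

func main() {
	cri := &fakeCRI{stopped: map[string]bool{"a92ded8c2c84": true}}
	if err := teardown(cri, "a92ded8c2c84"); err != nil {
		fmt.Println("teardown failed:", err)
	}
}
```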
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0dbc187486a77       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 6 minutes ago       Running             gcp-auth                                 0                   754882dcda596       gcp-auth-89d5ffd79-b6kzx
	3603c45c1e4ab       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             6 minutes ago       Running             controller                               0                   31855714f04d8       ingress-nginx-controller-bc57996ff-8jlsc
	b6501ff69088d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	85a5122ba30eb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	33527f5387a55       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	2b3dcba2a09e7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ea5a7e7486ae3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	5247d23b3a397       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   5faba155231dd       snapshot-controller-56fcc65765-tdxm7
	68547a0643ba6       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   4cb61d4296010       csi-hostpath-resizer-0
	a2eec9453e9d3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   205f02ffaeb65       csi-hostpath-attacher-0
	d3033819602e2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   402c15a75f3c1       csi-hostpathplugin-pwtwp
	ffffb6d23a520       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   6 minutes ago       Exited              patch                                    0                   0defdefc8e690       ingress-nginx-admission-patch-22v56
	adcb6aad69051       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   b44ff8bf56a7c       snapshot-controller-56fcc65765-b752p
	d7c74998aab32       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   6 minutes ago       Exited              create                                   0                   92efe213e3cc9       ingress-nginx-admission-create-dgb9n
	318be751079db       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   cdfaa5befff59       local-path-provisioner-86d989889c-6xhgj
	9db25418c7b36       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             7 minutes ago       Running             minikube-ingress-dns                     0                   0a160d796662b       kube-ingress-dns-minikube
	fd1c0fa2e8742       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   578052293e511       storage-provisioner
	5fc078f948938       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             7 minutes ago       Running             coredns                                  0                   dd25c29f2c98b       coredns-7c65d6cfc9-f6b44
	8953bd3ac9bbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             8 minutes ago       Running             kube-proxy                               0                   31612ec902e41       kube-proxy-7grrw
	e3e02e9338f21       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             8 minutes ago       Running             kindnet-cni                              0                   efca226e04346       kindnet-2bwl4
	f7c9dd60c650e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             8 minutes ago       Running             kube-apiserver                           0                   325d1d3961d30       kube-apiserver-addons-821781
	aef3299386ef0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             8 minutes ago       Running             etcd                                     0                   5db6677261478       etcd-addons-821781
	23817b3f6401e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             8 minutes ago       Running             kube-scheduler                           0                   192ccdf49d648       kube-scheduler-addons-821781
	319dfee9ab334       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             8 minutes ago       Running             kube-controller-manager                  0                   471807181e888       kube-controller-manager-addons-821781
	
	
	==> coredns [5fc078f948938114660b02640bfe9e5a3f7ce8c6c4921acab361abb69e4dc8e8] <==
	[INFO] 10.244.0.11:54433 - 5196 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117872s
	[INFO] 10.244.0.11:55203 - 39009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079023s
	[INFO] 10.244.0.11:55203 - 18278 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066179s
	[INFO] 10.244.0.11:53992 - 3361 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005725192s
	[INFO] 10.244.0.11:53992 - 5182 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005902528s
	[INFO] 10.244.0.11:58640 - 39752 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005962306s
	[INFO] 10.244.0.11:58640 - 45636 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007442692s
	[INFO] 10.244.0.11:58081 - 46876 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004814518s
	[INFO] 10.244.0.11:58081 - 7960 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005069952s
	[INFO] 10.244.0.11:56786 - 21825 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084442s
	[INFO] 10.244.0.11:56786 - 8540 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121405s
	[INFO] 10.244.0.21:49162 - 58748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183854s
	[INFO] 10.244.0.21:60540 - 21143 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264439s
	[INFO] 10.244.0.21:57612 - 22108 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123843s
	[INFO] 10.244.0.21:56370 - 29690 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174744s
	[INFO] 10.244.0.21:53939 - 42345 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115165s
	[INFO] 10.244.0.21:54191 - 30184 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102696s
	[INFO] 10.244.0.21:43721 - 49242 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007714914s
	[INFO] 10.244.0.21:58502 - 61297 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008280312s
	[INFO] 10.244.0.21:45585 - 36043 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008154564s
	[INFO] 10.244.0.21:50514 - 10749 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008661461s
	[INFO] 10.244.0.21:41083 - 31758 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006832696s
	[INFO] 10.244.0.21:53762 - 8306 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007439813s
	[INFO] 10.244.0.21:37796 - 13809 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002178233s
	[INFO] 10.244.0.21:36516 - 28559 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002337896s
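
The burst of NXDOMAIN answers is ordinary resolver search-path expansion rather than a failure: with the cluster default of ndots:5, a relative name with fewer than five dots is tried against each search domain before being queried as an absolute name, which matches the suffix sequence in the log. A small illustrative sketch, assuming the search list shown in the queries above (the function itself is hypothetical):

```go
// searchpath.go - illustrate how a stub resolver with ndots:5 expands a
// name through its search domains, producing the NXDOMAIN sequence seen
// in the coredns log. Illustrative sketch; the search list is copied
// from the queries above, not read from a real resolv.conf.
package main

import (
	"fmt"
	"strings"
)

// candidates returns the queries a stub resolver would try, in order.
func candidates(name string, ndots int, search []string) []string {
	if strings.HasSuffix(name, ".") {
		return []string{name} // already absolute: query as-is
	}
	var out []string
	if strings.Count(name, ".") >= ndots {
		out = append(out, name+".") // enough dots: try absolute first
	}
	for _, d := range search {
		out = append(out, name+"."+d+".")
	}
	if strings.Count(name, ".") < ndots {
		out = append(out, name+".") // absolute form tried last
	}
	return out
}

func main() {
	search := []string{
		"svc.cluster.local",
		"cluster.local",
		"europe-west2-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal",
		"google.internal",
	}
	// Each suffixed form gets NXDOMAIN; the bare name finally resolves.
	for _, q := range candidates("registry.kube-system.svc.cluster.local", 5, search) {
		fmt.Println(q)
	}
}
```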
	
	
	==> describe nodes <==
	Name:               addons-821781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-821781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-821781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-821781
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-821781"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-821781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:31:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:23:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:27:21 +0000   Mon, 16 Sep 2024 10:24:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-821781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a93a1abfd8e74fb89ecb0b25fd80b840
	  System UUID:                c474d608-aa29-4551-b357-d17e9479a01d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-b6kzx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8jlsc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m9s
	  kube-system                 coredns-7c65d6cfc9-f6b44                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m15s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 csi-hostpathplugin-pwtwp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 etcd-addons-821781                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m20s
	  kube-system                 kindnet-2bwl4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m15s
	  kube-system                 kube-apiserver-addons-821781                250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-controller-manager-addons-821781       200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-proxy-7grrw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-addons-821781                100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 snapshot-controller-56fcc65765-b752p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 snapshot-controller-56fcc65765-tdxm7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  local-path-storage          local-path-provisioner-86d989889c-6xhgj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 8m14s  kube-proxy       
	  Normal   Starting                 8m20s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m20s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m20s  kubelet          Node addons-821781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m20s  kubelet          Node addons-821781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m20s  kubelet          Node addons-821781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m16s  node-controller  Node addons-821781 event: Registered Node addons-821781 in Controller
	  Normal   NodeReady                7m34s  kubelet          Node addons-821781 status is now: NodeReady
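
The percentages in the Allocated resources table are requests over allocatable capacity, truncated to a whole percent: 950m of CPU against 8 allocatable cores (8000m) is 11.875%, displayed as 11%. A tiny sketch of that arithmetic, with values copied from the tables above (the helper itself is illustrative):

```go
// alloc.go - reproduce the request percentages in the Allocated
// resources table: requests over allocatable capacity, truncated to a
// whole percent. Values are copied from the node tables above; the
// helper itself is illustrative.
package main

import "fmt"

func pct(request, allocatable int64) int64 {
	return request * 100 / allocatable // integer division truncates, matching the table
}

func main() {
	// CPU in millicores: 8 allocatable cores = 8000m, 950m requested.
	fmt.Printf("cpu requests: %d%%\n", pct(950, 8000)) // 11%
	fmt.Printf("cpu limits:   %d%%\n", pct(100, 8000)) // 1%
	// Memory in KiB: 32859320Ki allocatable, 310Mi requested.
	fmt.Printf("memory requests: %d%%\n", pct(310*1024, 32859320)) // 0%
}
```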
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.000714]  #3
	[  +0.002750]  #4
	[  +0.001708] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003513] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002098] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [aef3299386ef0b5e614f1782f09a3e4eb3774331f3c09cf0faefd64bff15f39e] <==
	{"level":"warn","ts":"2024-09-16T10:24:33.965134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.284694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-09-16T10:24:33.965140Z","caller":"traceutil/trace.go:171","msg":"trace[589393049] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.482158ms","start":"2024-09-16T10:24:33.834652Z","end":"2024-09-16T10:24:33.965134Z","steps":["trace[589393049] 'agreement among raft nodes before linearized reading'  (duration: 130.392783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.112983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs\" ","response":"range_response_count:1 size:560"}
	{"level":"warn","ts":"2024-09-16T10:24:33.965172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.412831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/default\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964790Z","caller":"traceutil/trace.go:171","msg":"trace[1719481168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-resizer; range_end:; response_count:1; response_revision:871; }","duration":"130.308398ms","start":"2024-09-16T10:24:33.834475Z","end":"2024-09-16T10:24:33.964784Z","steps":["trace[1719481168] 'agreement among raft nodes before linearized reading'  (duration: 130.231604ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965031Z","caller":"traceutil/trace.go:171","msg":"trace[1439753586] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:871; }","duration":"130.351105ms","start":"2024-09-16T10:24:33.834675Z","end":"2024-09-16T10:24:33.965026Z","steps":["trace[1439753586] 'agreement among raft nodes before linearized reading'  (duration: 130.285964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.622694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:979"}
	{"level":"info","ts":"2024-09-16T10:24:33.965260Z","caller":"traceutil/trace.go:171","msg":"trace[3301844] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.644948ms","start":"2024-09-16T10:24:33.834605Z","end":"2024-09-16T10:24:33.965250Z","steps":["trace[3301844] 'agreement among raft nodes before linearized reading'  (duration: 130.58562ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.745393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:1 size:878"}
	{"level":"info","ts":"2024-09-16T10:24:33.965091Z","caller":"traceutil/trace.go:171","msg":"trace[630312888] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.242708ms","start":"2024-09-16T10:24:33.834842Z","end":"2024-09-16T10:24:33.965085Z","steps":["trace[630312888] 'agreement among raft nodes before linearized reading'  (duration: 130.2013ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965306Z","caller":"traceutil/trace.go:171","msg":"trace[687212945] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:1; response_revision:871; }","duration":"130.768911ms","start":"2024-09-16T10:24:33.834532Z","end":"2024-09-16T10:24:33.965301Z","steps":["trace[687212945] 'agreement among raft nodes before linearized reading'  (duration: 130.728326ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965159Z","caller":"traceutil/trace.go:171","msg":"trace[1851867066] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:871; }","duration":"130.30942ms","start":"2024-09-16T10:24:33.834844Z","end":"2024-09-16T10:24:33.965154Z","steps":["trace[1851867066] 'agreement among raft nodes before linearized reading'  (duration: 130.267065ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965180Z","caller":"traceutil/trace.go:171","msg":"trace[395277833] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"130.138451ms","start":"2024-09-16T10:24:33.835036Z","end":"2024-09-16T10:24:33.965175Z","steps":["trace[395277833] 'agreement among raft nodes before linearized reading'  (duration: 130.084008ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.964761Z","caller":"traceutil/trace.go:171","msg":"trace[1846466404] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:871; }","duration":"130.050288ms","start":"2024-09-16T10:24:33.834699Z","end":"2024-09-16T10:24:33.964750Z","steps":["trace[1846466404] 'agreement among raft nodes before linearized reading'  (duration: 129.823354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.867331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-09-16T10:24:33.964791Z","caller":"traceutil/trace.go:171","msg":"trace[1570104672] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"101.79293ms","start":"2024-09-16T10:24:33.862992Z","end":"2024-09-16T10:24:33.964785Z","steps":["trace[1570104672] 'agreement among raft nodes before linearized reading'  (duration: 101.763738ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965421Z","caller":"traceutil/trace.go:171","msg":"trace[1827982125] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:871; }","duration":"130.890995ms","start":"2024-09-16T10:24:33.834525Z","end":"2024-09-16T10:24:33.965416Z","steps":["trace[1827982125] 'agreement among raft nodes before linearized reading'  (duration: 130.852764ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965209Z","caller":"traceutil/trace.go:171","msg":"trace[945447364] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/default; range_end:; response_count:1; response_revision:871; }","duration":"130.449227ms","start":"2024-09-16T10:24:33.834754Z","end":"2024-09-16T10:24:33.965203Z","steps":["trace[945447364] 'agreement among raft nodes before linearized reading'  (duration: 130.396497ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.965508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.001003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-09-16T10:24:33.965579Z","caller":"traceutil/trace.go:171","msg":"trace[1490541276] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:871; }","duration":"131.063942ms","start":"2024-09-16T10:24:33.834502Z","end":"2024-09-16T10:24:33.965566Z","steps":["trace[1490541276] 'agreement among raft nodes before linearized reading'  (duration: 130.98224ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:33.964852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.18611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/snapshot-controller\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2024-09-16T10:24:33.965093Z","caller":"traceutil/trace.go:171","msg":"trace[1524858032] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:871; }","duration":"129.821011ms","start":"2024-09-16T10:24:33.835267Z","end":"2024-09-16T10:24:33.965088Z","steps":["trace[1524858032] 'agreement among raft nodes before linearized reading'  (duration: 129.760392ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:24:33.965632Z","caller":"traceutil/trace.go:171","msg":"trace[945136232] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/snapshot-controller; range_end:; response_count:1; response_revision:871; }","duration":"129.963575ms","start":"2024-09-16T10:24:33.835661Z","end":"2024-09-16T10:24:33.965624Z","steps":["trace[945136232] 'agreement among raft nodes before linearized reading'  (duration: 129.14136ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:26.413976Z","caller":"traceutil/trace.go:171","msg":"trace[182413184] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"129.574416ms","start":"2024-09-16T10:25:26.284376Z","end":"2024-09-16T10:25:26.413950Z","steps":["trace[182413184] 'process raft request'  (duration: 67.733345ms)","trace[182413184] 'compare'  (duration: 61.701552ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:48.300626Z","caller":"traceutil/trace.go:171","msg":"trace[869038067] transaction","detail":"{read_only:false; response_revision:1265; number_of_response:1; }","duration":"110.748846ms","start":"2024-09-16T10:25:48.189856Z","end":"2024-09-16T10:25:48.300605Z","steps":["trace[869038067] 'process raft request'  (duration: 107.391476ms)"],"step_count":1}
	
	
	==> gcp-auth [0dbc187486a77d691a5db4775360d83cdf6dd7084d4c3bd9123b7e051fd6bd74] <==
	2024/09/16 10:25:47 GCP Auth Webhook started!
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	2024/09/16 10:26:53 Ready to marshal response ...
	2024/09/16 10:26:53 Ready to write response ...
	
	
	==> kernel <==
	 10:32:07 up 14 min,  0 users,  load average: 0.06, 0.34, 0.26
	Linux addons-821781 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e3e02e9338f21aaebd9581cc8718aafa13d14ee40cd7b3e00ce784cac690f101] <==
	I0916 10:30:03.305028       1 main.go:299] handling current node
	I0916 10:30:13.305464       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:13.305594       1 main.go:299] handling current node
	I0916 10:30:23.305490       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:23.305568       1 main.go:299] handling current node
	I0916 10:30:33.304728       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:33.304762       1 main.go:299] handling current node
	I0916 10:30:43.305391       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:43.305423       1 main.go:299] handling current node
	I0916 10:30:53.298935       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:53.298976       1 main.go:299] handling current node
	I0916 10:31:03.301461       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:03.301497       1 main.go:299] handling current node
	I0916 10:31:13.305439       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:13.305475       1 main.go:299] handling current node
	I0916 10:31:23.305425       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:23.305467       1 main.go:299] handling current node
	I0916 10:31:33.301408       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:33.301440       1 main.go:299] handling current node
	I0916 10:31:43.298858       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:43.298909       1 main.go:299] handling current node
	I0916 10:31:53.298355       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:53.298396       1 main.go:299] handling current node
	I0916 10:32:03.300620       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:03.300650       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f7c9dd60c650e6e531f3a674b5312cb81a0d1e52d071fe375b7e3e4306a0e6b7] <==
	W0916 10:24:33.565951       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.565953       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	E0916 10:24:33.565979       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:33.599472       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused
	E0916 10:24:33.599513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.58.20:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:58.720213       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 10:24:58.720232       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:58.720259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 10:24:58.720301       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:24:58.721354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 10:24:58.721362       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:25:54.202103       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:25:54.202136       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.74.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.74.143:443: connect: connection refused" logger="UnhandledError"
	E0916 10:25:54.202195       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:25:54.215066       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:26:47.647164       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:26:48.662402       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 10:26:53.534738       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.40.159"}
	I0916 10:31:55.224531       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
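
The "Failed calling webhook, failing open" lines show fail-open admission: when the gcp-auth webhook endpoint is unreachable, the request is admitted anyway and only an error is logged. A hypothetical sketch of that decision, with illustrative names standing in for the real kube-apiserver code:

```go
// failopen.go - illustrate fail-open admission as logged above: when the
// webhook endpoint is unreachable, the request is still admitted and the
// error is only logged. Hypothetical sketch; callWebhook and the policy
// plumbing are illustrative, not the real kube-apiserver code.
package main

import (
	"errors"
	"fmt"
)

// stands in for the real HTTPS POST to the gcp-auth mutating webhook
func callWebhook() error {
	return errors.New(`Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.58.20:443: connect: connection refused`)
}

// admit applies the webhook's failure policy: with failOpen (Ignore),
// a webhook error is logged and the request proceeds unmutated.
func admit(failOpen bool) bool {
	if err := callWebhook(); err != nil {
		if failOpen {
			fmt.Println("W failing open gcp-auth-mutate.k8s.io:", err)
			return true
		}
		fmt.Println("E denying request:", err)
		return false
	}
	return true
}

func main() {
	fmt.Println("admitted:", admit(true)) // prints the warning, then true
}
```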
	
	
	==> kube-controller-manager [319dfee9ab3346abcf747e198dfbb926507cfe452a419b03ec8d5992eb8f41ca] <==
	I0916 10:26:57.755257       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0916 10:26:57.926605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="51.47µs"
	I0916 10:26:57.939305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.337707ms"
	I0916 10:26:57.939375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="37.082µs"
	I0916 10:27:04.034685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="8.781µs"
	W0916 10:27:04.365551       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:04.365591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:27:14.151507       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0916 10:27:21.385941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-821781"
	I0916 10:27:27.020674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="7.724µs"
	W0916 10:27:28.351938       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:28.351975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:28:01.562193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:28:01.562231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:28:51.468704       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:28:51.468745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:29:33.531148       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:29:33.531188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:30:17.809574       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:17.809622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:31:12.836852       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:31:12.836900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:31:28.145289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="5.298µs"
	W0916 10:32:02.816718       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:32:02.816762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8953bd3ac9bbec3334fc83f9ed0f2cf7a04a42f2c0323224b83003a9cf8b8cf6] <==
	I0916 10:23:52.638596       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:52.921753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:52.922374       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:53.313675       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:53.319718       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:53.497957       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:53.508623       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:53.508659       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:53.510794       1 config.go:199] "Starting service config controller"
	I0916 10:23:53.510833       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:53.510868       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:53.510874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:53.511480       1 config.go:328] "Starting node config controller"
	I0916 10:23:53.511491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:53.617474       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:53.617556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:23:53.711794       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23817b3f6401ec0da3cbedd92fc14f63f50dbbaefce66178a9fa3b67f7491316] <==
	W0916 10:23:44.897301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0916 10:23:44.897124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:44.898296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:44.897140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:44.898337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:44.898344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:45.722888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.722892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:45.722927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.731239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.731280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.734491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:23:45.734527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.741804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.741845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.771121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:45.771158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.886831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.886867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:45.913242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:45.913290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:46.023935       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:46.023972       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:23:48.220429       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:31:07 addons-821781 kubelet[1623]: E0916 10:31:07.280051    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482667279785517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:07 addons-821781 kubelet[1623]: E0916 10:31:07.280087    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482667279785517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:17 addons-821781 kubelet[1623]: E0916 10:31:17.283113    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482677282874049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:17 addons-821781 kubelet[1623]: E0916 10:31:17.283145    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482677282874049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:27 addons-821781 kubelet[1623]: E0916 10:31:27.285622    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482687285376468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:27 addons-821781 kubelet[1623]: E0916 10:31:27.285654    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482687285376468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.402891    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pcs7\" (UniqueName: \"kubernetes.io/projected/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-kube-api-access-5pcs7\") pod \"82f2a6b8-aafa-4f82-a707-d4bdaedd415d\" (UID: \"82f2a6b8-aafa-4f82-a707-d4bdaedd415d\") "
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.402951    1623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-tmp-dir\") pod \"82f2a6b8-aafa-4f82-a707-d4bdaedd415d\" (UID: \"82f2a6b8-aafa-4f82-a707-d4bdaedd415d\") "
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.403329    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "82f2a6b8-aafa-4f82-a707-d4bdaedd415d" (UID: "82f2a6b8-aafa-4f82-a707-d4bdaedd415d"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.404946    1623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-kube-api-access-5pcs7" (OuterVolumeSpecName: "kube-api-access-5pcs7") pod "82f2a6b8-aafa-4f82-a707-d4bdaedd415d" (UID: "82f2a6b8-aafa-4f82-a707-d4bdaedd415d"). InnerVolumeSpecName "kube-api-access-5pcs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.503499    1623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5pcs7\" (UniqueName: \"kubernetes.io/projected/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-kube-api-access-5pcs7\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.503539    1623 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/82f2a6b8-aafa-4f82-a707-d4bdaedd415d-tmp-dir\") on node \"addons-821781\" DevicePath \"\""
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.515541    1623 scope.go:117] "RemoveContainer" containerID="2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302"
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.533377    1623 scope.go:117] "RemoveContainer" containerID="2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302"
	Sep 16 10:31:29 addons-821781 kubelet[1623]: E0916 10:31:29.533950    1623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302\": container with ID starting with 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302 not found: ID does not exist" containerID="2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302"
	Sep 16 10:31:29 addons-821781 kubelet[1623]: I0916 10:31:29.533994    1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302"} err="failed to get container status \"2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302\": rpc error: code = NotFound desc = could not find container \"2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302\": container with ID starting with 2a650198714d35c247082f0f70ae75fbf54ae90df78c89c3d9ffa6825da26302 not found: ID does not exist"
	Sep 16 10:31:31 addons-821781 kubelet[1623]: I0916 10:31:31.109067    1623 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82f2a6b8-aafa-4f82-a707-d4bdaedd415d" path="/var/lib/kubelet/pods/82f2a6b8-aafa-4f82-a707-d4bdaedd415d/volumes"
	Sep 16 10:31:37 addons-821781 kubelet[1623]: E0916 10:31:37.287802    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482697287526991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:37 addons-821781 kubelet[1623]: E0916 10:31:37.287835    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482697287526991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:47 addons-821781 kubelet[1623]: E0916 10:31:47.289550    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482707289295313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:47 addons-821781 kubelet[1623]: E0916 10:31:47.289592    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482707289295313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:57 addons-821781 kubelet[1623]: E0916 10:31:57.292296    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482717292073207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:57 addons-821781 kubelet[1623]: E0916 10:31:57.292329    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482717292073207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:32:07 addons-821781 kubelet[1623]: E0916 10:32:07.294938    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482727294695486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:32:07 addons-821781 kubelet[1623]: E0916 10:32:07.294969    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482727294695486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:483736,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [fd1c0fa2e8742125904216a45b6d84f9b367888422cb6083d3e482fd77452994] <==
	I0916 10:24:34.797513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:34.805288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:34.805397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:34.813404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:34.813588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	I0916 10:24:34.814304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d6ca95d-581a-4537-b803-ac9e02f43ec1", APIVersion:"v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4 became leader
	I0916 10:24:34.914571       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-821781_2e23b838-a586-49c2-aa56-0c6b34db9fc4!
	

-- /stdout --
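Two recurring patterns in the dump above are noise rather than the failure itself: the kube-scheduler's early "forbidden" list/watch warnings are the usual startup race before its RBAC bindings become visible (the final "Caches are synced" line shows they cleared), and the kubelet's repeating eviction-manager errors mean CRI-O's ImageFsInfo response lacked the stats the eviction manager expects. Either can be probed directly if it persists; a sketch (crictl runs inside the node, and kubectl assumes a working binary):

# inside the minikube node: ask CRI-O for its image filesystem stats directly
sudo crictl imagefsinfo
# from the host, with a working kubectl: confirm the scheduler's RBAC grants
kubectl auth can-i list pods --as=system:kube-scheduler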
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-821781 -n addons-821781
helpers_test.go:261: (dbg) Run:  kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (488.534µs)
helpers_test.go:263: kubectl --context addons-821781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/CSI (362.01s)
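Every kubectl invocation in this run dies with "fork/exec /usr/local/bin/kubectl: exec format error", meaning the kernel refused to execute the binary at all; that almost always indicates a wrong-architecture, truncated, or otherwise non-ELF file at that path rather than anything cluster-side. A quick triage on the build host (a sketch; nothing minikube-specific is assumed):

file /usr/local/bin/kubectl              # expect "ELF 64-bit LSB executable, x86-64" on this amd64 agent
uname -m                                 # host architecture (x86_64 here)
head -c 4 /usr/local/bin/kubectl | xxd   # a valid ELF binary starts with 7f 45 4c 46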

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-821781 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:982: (dbg) Non-zero exit: kubectl --context addons-821781 apply -f testdata/storage-provisioner-rancher/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (325.272µs)
addons_test.go:984: kubectl apply pvc.yaml failed: args "kubectl --context addons-821781 apply -f testdata/storage-provisioner-rancher/pvc.yaml": fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/LocalPath (0.00s)
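The test aborted at its first kubectl call, so the local-path provisioner was never actually exercised. Once kubectl works, the addon can be smoke-tested by hand with a throwaway claim; the manifest below is illustrative, not the repo's testdata/storage-provisioner-rancher/pvc.yaml, and assumes the provisioner's usual local-path StorageClass name:

kubectl --context addons-821781 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-smoke
spec:
  storageClassName: local-path
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
EOF
# local-path binds on first consumer, so the claim stays Pending until a pod mounts it
kubectl --context addons-821781 get pvc local-path-smoke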

TestCertOptions (25.33s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-904767 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-904767 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.32945221s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-904767 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-904767 config view
cert_options_test.go:88: (dbg) Non-zero exit: kubectl --context cert-options-904767 config view: fork/exec /usr/local/bin/kubectl: exec format error (486.967µs)
cert_options_test.go:90: failed to get kubectl config. args "kubectl --context cert-options-904767 config view" : fork/exec /usr/local/bin/kubectl: exec format error
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = ""
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-904767 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-16 11:07:16.049117514 +0000 UTC m=+2693.895898937
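Note that the port assertion itself never produced output, since kubectl failed before running; the ssh call above reads the kubeconfig inside the node instead. The same check can be repeated by hand while the profile is up (a sketch):

out/minikube-linux-amd64 -p cert-options-904767 ssh -- "sudo grep 'server:' /etc/kubernetes/admin.conf"
# a profile honoring --apiserver-port=8555 should print something like: server: https://192.168.76.2:8555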
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-904767
helpers_test.go:235: (dbg) docker inspect cert-options-904767:

-- stdout --
	[
	    {
	        "Id": "aaa76c659567b0caa8cd2894e8ce96f6964af4390ef8de7ef1f037b6422c41c0",
	        "Created": "2024-09-16T11:06:59.833864248Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:06:59.952830341Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/aaa76c659567b0caa8cd2894e8ce96f6964af4390ef8de7ef1f037b6422c41c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aaa76c659567b0caa8cd2894e8ce96f6964af4390ef8de7ef1f037b6422c41c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/aaa76c659567b0caa8cd2894e8ce96f6964af4390ef8de7ef1f037b6422c41c0/hosts",
	        "LogPath": "/var/lib/docker/containers/aaa76c659567b0caa8cd2894e8ce96f6964af4390ef8de7ef1f037b6422c41c0/aaa76c659567b0caa8cd2894e8ce96f6964af4390ef8de7ef1f037b6422c41c0-json.log",
	        "Name": "/cert-options-904767",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "cert-options-904767:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "cert-options-904767",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/80df4776baafa89ab6a8d32d6778e41c3ace1fa7d05d8808cda81ef60240922e-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80df4776baafa89ab6a8d32d6778e41c3ace1fa7d05d8808cda81ef60240922e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80df4776baafa89ab6a8d32d6778e41c3ace1fa7d05d8808cda81ef60240922e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80df4776baafa89ab6a8d32d6778e41c3ace1fa7d05d8808cda81ef60240922e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "cert-options-904767",
	                "Source": "/var/lib/docker/volumes/cert-options-904767/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "cert-options-904767",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8555/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "cert-options-904767",
	                "name.minikube.sigs.k8s.io": "cert-options-904767",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73767b11b1052a480e8d82d82e7830c3a1b8d393c7a7075dfa28467ed53d9c15",
	            "SandboxKey": "/var/run/docker/netns/73767b11b105",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "cert-options-904767": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ccef3dfac4ce9fee854493ac418fd0b309f173e9695e83042552282d3666dedd",
	                    "EndpointID": "493c591928341553e6d5d04d5aec3d78a566de597ed43db2882027e00927da76",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "cert-options-904767",
	                        "aaa76c659567"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
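For a post-mortem like this the full inspect JSON is rarely needed; individual fields can be pulled with a format template (one-liners against the same container, assuming it is still running):

docker port cert-options-904767 8555/tcp   # prints the host mapping, 127.0.0.1:33051 above
docker inspect -f '{{ (index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort }}' cert-options-904767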
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-options-904767 -n cert-options-904767
helpers_test.go:244: <<< TestCertOptions FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertOptions]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-904767 logs -n 25
helpers_test.go:252: TestCertOptions logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                              Args                              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| cp      | ha-107957 cp ha-107957:/home/docker/cp-test.txt                | ha-107957                 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03:/home/docker/cp-test_ha-107957_ha-107957-m03.txt |                           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                               | ha-107957                 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957 sudo cat                                             |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                       |                           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m03 sudo cat                        | ha-107957                 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957_ha-107957-m03.txt               |                           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957:/home/docker/cp-test.txt                | ha-107957                 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04:/home/docker/cp-test_ha-107957_ha-107957-m04.txt |                           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                               | ha-107957                 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957 sudo cat                                             |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                       |                           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m04 sudo cat                        | ha-107957                 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957_ha-107957-m04.txt               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-749637                                   | kubernetes-upgrade-749637 | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	| start   | -p kubernetes-upgrade-749637                                   | kubernetes-upgrade-749637 | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC |                     |
	|         | --memory=2200                                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                   |                           |         |         |                     |                     |
	|         | --alsologtostderr                                              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-802794                                      | minikube                  | jenkins | v1.26.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2200                                                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker                                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-911411 stop                                    | minikube                  | jenkins | v1.26.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	| start   | -p stopped-upgrade-911411                                      | stopped-upgrade-911411    | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	|         | --memory=2200                                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-922846                                      | missing-upgrade-922846    | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2200                                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-911411                                      | stopped-upgrade-911411    | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	| start   | -p cert-expiration-997173                                      | cert-expiration-997173    | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2048                                                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                           |                           |         |         |                     |                     |
	|         | --driver=docker                                                |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-802794                                      | running-upgrade-802794    | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2200                                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-802794                                      | running-upgrade-802794    | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	| delete  | -p missing-upgrade-922846                                      | missing-upgrade-922846    | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	| start   | -p force-systemd-flag-587021                                   | force-systemd-flag-587021 | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2048 --force-systemd                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                              |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| start   | -p pause-259137 --memory=2048                                  | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:07 UTC |
	|         | --install-addons=false                                         |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-587021 ssh cat                              | force-systemd-flag-587021 | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-587021                                   | force-systemd-flag-587021 | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	| start   | -p cert-options-904767                                         | cert-options-904767       | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:07 UTC |
	|         | --memory=2048                                                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                                  |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                                    |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                               |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                          |                           |         |         |                     |                     |
	|         | --driver=docker                                                |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| start   | -p pause-259137                                                | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | --alsologtostderr                                              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                       |                           |         |         |                     |                     |
	| ssh     | cert-options-904767 ssh                                        | cert-options-904767       | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | openssl x509 -text -noout -in                                  |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                          |                           |         |         |                     |                     |
	| ssh     | -p cert-options-904767 -- sudo                                 | cert-options-904767       | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | cat /etc/kubernetes/admin.conf                                 |                           |         |         |                     |                     |
	|---------|----------------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
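The Audit table is minikube's persisted command history, so it can also be read without invoking minikube logs; a sketch, assuming this run's MINIKUBE_HOME and minikube's default audit log location:

tail -n 3 /home/jenkins/minikube-integration/19651-3799/.minikube/logs/audit.json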
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:07:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:07:13.291488  251845 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:07:13.291613  251845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:07:13.291623  251845 out.go:358] Setting ErrFile to fd 2...
	I0916 11:07:13.291629  251845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:07:13.291821  251845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:07:13.292404  251845 out.go:352] Setting JSON to false
	I0916 11:07:13.293819  251845 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2973,"bootTime":1726481860,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:07:13.293967  251845 start.go:139] virtualization: kvm guest
	I0916 11:07:13.297530  251845 out.go:177] * [pause-259137] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:07:13.298966  251845 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:07:13.299020  251845 notify.go:220] Checking for updates...
	I0916 11:07:13.302506  251845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:07:13.303908  251845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:07:13.305120  251845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:07:13.306437  251845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:07:13.307859  251845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:07:13.309670  251845 config.go:182] Loaded profile config "pause-259137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:07:13.310194  251845 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:07:13.341184  251845 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:07:13.341294  251845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:07:13.403608  251845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:07:13.391793524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:07:13.403724  251845 docker.go:318] overlay module found
	I0916 11:07:13.406139  251845 out.go:177] * Using the docker driver based on existing profile
	I0916 11:07:13.407952  251845 start.go:297] selected driver: docker
	I0916 11:07:13.407972  251845 start.go:901] validating driver "docker" against &{Name:pause-259137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-259137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:07:13.408102  251845 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:07:13.408193  251845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:07:13.467822  251845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:07:13.457374525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:07:13.468427  251845 cni.go:84] Creating CNI manager for ""
	I0916 11:07:13.468477  251845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:07:13.468523  251845 start.go:340] cluster config:
	{Name:pause-259137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-259137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:07:13.470805  251845 out.go:177] * Starting "pause-259137" primary control-plane node in "pause-259137" cluster
	I0916 11:07:13.472144  251845 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:07:13.473324  251845 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:07:13.474416  251845 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:07:13.474468  251845 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:07:13.474480  251845 cache.go:56] Caching tarball of preloaded images
	I0916 11:07:13.474508  251845 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:07:13.474576  251845 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:07:13.474591  251845 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:07:13.474710  251845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/pause-259137/config.json ...
	W0916 11:07:13.496264  251845 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:07:13.496284  251845 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:07:13.496375  251845 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:07:13.496392  251845 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:07:13.496398  251845 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:07:13.496407  251845 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:07:13.496414  251845 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:07:13.563618  251845 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:07:13.563675  251845 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:07:13.563735  251845 start.go:360] acquireMachinesLock for pause-259137: {Name:mked4cb5e6f168c48db0774a0cb4096ff21fd6e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:07:13.563815  251845 start.go:364] duration metric: took 55.035µs to acquireMachinesLock for "pause-259137"
	I0916 11:07:13.563839  251845 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:07:13.563849  251845 fix.go:54] fixHost starting: 
	I0916 11:07:13.564110  251845 cli_runner.go:164] Run: docker container inspect pause-259137 --format={{.State.Status}}
	I0916 11:07:13.583396  251845 fix.go:112] recreateIfNeeded on pause-259137: state=Running err=<nil>
	W0916 11:07:13.583436  251845 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:07:13.585916  251845 out.go:177] * Updating the running docker "pause-259137" container ...
	I0916 11:07:12.577917  247685 out.go:235]   - Configuring RBAC rules ...
	I0916 11:07:12.578059  247685 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:07:12.581690  247685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:07:12.588245  247685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:07:12.590946  247685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:07:12.594735  247685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:07:12.597290  247685 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:07:12.931140  247685 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:07:13.361605  247685 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:07:13.929944  247685 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:07:13.930921  247685 kubeadm.go:310] 
	I0916 11:07:13.931032  247685 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:07:13.931038  247685 kubeadm.go:310] 
	I0916 11:07:13.931123  247685 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:07:13.931127  247685 kubeadm.go:310] 
	I0916 11:07:13.931154  247685 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:07:13.931219  247685 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:07:13.931274  247685 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:07:13.931279  247685 kubeadm.go:310] 
	I0916 11:07:13.931338  247685 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:07:13.931343  247685 kubeadm.go:310] 
	I0916 11:07:13.931402  247685 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:07:13.931406  247685 kubeadm.go:310] 
	I0916 11:07:13.931472  247685 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:07:13.931555  247685 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:07:13.931639  247685 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:07:13.931643  247685 kubeadm.go:310] 
	I0916 11:07:13.931743  247685 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:07:13.931829  247685 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:07:13.931834  247685 kubeadm.go:310] 
	I0916 11:07:13.931926  247685 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8555 --token gqsryi.g3mrw156jsxttgb4 \
	I0916 11:07:13.932042  247685 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:07:13.932061  247685 kubeadm.go:310] 	--control-plane 
	I0916 11:07:13.932066  247685 kubeadm.go:310] 
	I0916 11:07:13.932169  247685 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:07:13.932174  247685 kubeadm.go:310] 
	I0916 11:07:13.932259  247685 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8555 --token gqsryi.g3mrw156jsxttgb4 \
	I0916 11:07:13.932376  247685 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:07:13.937665  247685 kubeadm.go:310] W0916 11:07:04.896293    1325 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:07:13.938035  247685 kubeadm.go:310] W0916 11:07:04.897201    1325 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:07:13.938294  247685 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:07:13.938424  247685 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:07:13.938446  247685 cni.go:84] Creating CNI manager for ""
	I0916 11:07:13.938453  247685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:07:13.940835  247685 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:07:13.943181  247685 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:07:13.948077  247685 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:07:13.948088  247685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:07:13.966735  247685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:07:14.196535  247685 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:07:14.196675  247685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:07:14.196761  247685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-options-904767 minikube.k8s.io/updated_at=2024_09_16T11_07_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=cert-options-904767 minikube.k8s.io/primary=true
	I0916 11:07:14.325081  247685 ops.go:34] apiserver oom_adj: -16
	I0916 11:07:14.325122  247685 kubeadm.go:1113] duration metric: took 128.492684ms to wait for elevateKubeSystemPrivileges
	I0916 11:07:14.325148  247685 kubeadm.go:394] duration metric: took 9.611466724s to StartCluster
	I0916 11:07:14.325168  247685 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:07:14.325264  247685 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:07:14.326866  247685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:07:14.327137  247685 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:07:14.327224  247685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:07:14.327255  247685 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:07:14.327378  247685 addons.go:69] Setting storage-provisioner=true in profile "cert-options-904767"
	I0916 11:07:14.327397  247685 addons.go:234] Setting addon storage-provisioner=true in "cert-options-904767"
	I0916 11:07:14.327402  247685 addons.go:69] Setting default-storageclass=true in profile "cert-options-904767"
	I0916 11:07:14.327425  247685 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-904767"
	I0916 11:07:14.327427  247685 host.go:66] Checking if "cert-options-904767" exists ...
	I0916 11:07:14.327432  247685 config.go:182] Loaded profile config "cert-options-904767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:07:14.327782  247685 cli_runner.go:164] Run: docker container inspect cert-options-904767 --format={{.State.Status}}
	I0916 11:07:14.327941  247685 cli_runner.go:164] Run: docker container inspect cert-options-904767 --format={{.State.Status}}
	I0916 11:07:14.329116  247685 out.go:177] * Verifying Kubernetes components...
	I0916 11:07:14.331294  247685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:07:14.357601  247685 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:07:14.357779  247685 addons.go:234] Setting addon default-storageclass=true in "cert-options-904767"
	I0916 11:07:14.357814  247685 host.go:66] Checking if "cert-options-904767" exists ...
	I0916 11:07:14.358334  247685 cli_runner.go:164] Run: docker container inspect cert-options-904767 --format={{.State.Status}}
	I0916 11:07:14.359210  247685 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:07:14.359219  247685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:07:14.359259  247685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-904767
	I0916 11:07:14.379376  247685 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:07:14.379387  247685 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:07:14.379444  247685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-904767
	I0916 11:07:14.382464  247685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/cert-options-904767/id_rsa Username:docker}
	I0916 11:07:14.406170  247685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/cert-options-904767/id_rsa Username:docker}
	I0916 11:07:14.521759  247685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:07:14.596327  247685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:07:14.614670  247685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:07:14.618181  247685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:07:15.016167  247685 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0916 11:07:15.017406  247685 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:07:15.017454  247685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:07:15.184917  247685 api_server.go:72] duration metric: took 857.746754ms to wait for apiserver process to appear ...
	I0916 11:07:15.184934  247685 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:07:15.184956  247685 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8555/healthz ...
	I0916 11:07:15.189738  247685 api_server.go:279] https://192.168.76.2:8555/healthz returned 200:
	ok
	I0916 11:07:15.190521  247685 api_server.go:141] control plane version: v1.31.1
	I0916 11:07:15.190535  247685 api_server.go:131] duration metric: took 5.596305ms to wait for apiserver health ...
	I0916 11:07:15.190544  247685 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:07:15.192414  247685 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:07:15.193775  247685 addons.go:510] duration metric: took 866.522122ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:07:15.195835  247685 system_pods.go:59] 5 kube-system pods found
	I0916 11:07:15.195850  247685 system_pods.go:61] "etcd-cert-options-904767" [ebb90100-b15a-43cd-b36d-3365b2423381] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 11:07:15.195856  247685 system_pods.go:61] "kube-apiserver-cert-options-904767" [d584f8ad-d97c-4c48-92ca-81a7230cc262] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 11:07:15.195863  247685 system_pods.go:61] "kube-controller-manager-cert-options-904767" [9f42bee2-d65e-43d1-9390-32204e587232] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 11:07:15.195869  247685 system_pods.go:61] "kube-scheduler-cert-options-904767" [2eef30f3-96b1-493f-9648-e15841f8804e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 11:07:15.195872  247685 system_pods.go:61] "storage-provisioner" [547b4fab-554d-4a5c-9293-a9e9318df9d9] Pending
	I0916 11:07:15.195876  247685 system_pods.go:74] duration metric: took 5.328523ms to wait for pod list to return data ...
	I0916 11:07:15.195883  247685 kubeadm.go:582] duration metric: took 868.721713ms to wait for: map[apiserver:true system_pods:true]
	I0916 11:07:15.195891  247685 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:07:15.198294  247685 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:07:15.198305  247685 node_conditions.go:123] node cpu capacity is 8
	I0916 11:07:15.198314  247685 node_conditions.go:105] duration metric: took 2.420108ms to run NodePressure ...
	I0916 11:07:15.198322  247685 start.go:241] waiting for startup goroutines ...
	I0916 11:07:15.520705  247685 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-options-904767" context rescaled to 1 replicas
	I0916 11:07:15.520737  247685 start.go:246] waiting for cluster config update ...
	I0916 11:07:15.520747  247685 start.go:255] writing updated cluster config ...
	I0916 11:07:15.521162  247685 ssh_runner.go:195] Run: rm -f paused
	I0916 11:07:15.527737  247685 out.go:177] * Done! kubectl is now configured to use "cert-options-904767" cluster and "default" namespace by default
	E0916 11:07:15.529084  247685 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
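
Note: the "exec format error" above is the same failure recorded for most tests in this report: the kernel refuses to execute /usr/local/bin/kubectl, which almost always means the file is not a valid binary for the host architecture (for example an arm64 build, or a truncated/HTML download, on this amd64 agent). A minimal diagnostic sketch, assuming shell access to the CI host; the path comes from the log above:

    file /usr/local/bin/kubectl                      # expect "ELF 64-bit LSB executable, x86-64" on this host
    uname -m                                         # host architecture to compare against
    head -c 4 /usr/local/bin/kubectl | od -An -tx1   # a valid ELF binary starts with 7f 45 4c 46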
	
	
	==> CRI-O <==
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.466462725Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.5.15-0" id=c1dff7e3-7747-4843-b3d8-9e78cc6c30f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.466629796Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,RepoTags:[registry.k8s.io/etcd:3.5.15-0],RepoDigests:[registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a],Size_:149009664,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=c1dff7e3-7747-4843-b3d8-9e78cc6c30f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.467066473Z" level=info msg="Ran pod sandbox 05b5566d9fd6544db9b9201cae5834608f90f79c65f34ef6c49c799f609dcb7f with infra container: kube-system/kube-scheduler-cert-options-904767/POD" id=16d0c0af-6c3a-4f0f-ac3b-9b039b4a56f6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.467137246Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.5.15-0" id=717577ed-8937-4be2-b482-4794d4361c43 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.467295278Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,RepoTags:[registry.k8s.io/etcd:3.5.15-0],RepoDigests:[registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a],Size_:149009664,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=717577ed-8937-4be2-b482-4794d4361c43 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.467835690Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.31.1" id=0419ae6f-2573-440c-b785-5d9830dab190 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.467877289Z" level=info msg="Creating container: kube-system/etcd-cert-options-904767/etcd" id=52664fc6-bbd3-4e43-8336-22ca4fc56daa name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.467942495Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.472572250Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,RepoTags:[registry.k8s.io/kube-scheduler:v1.31.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0 registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8],Size_:68420934,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=0419ae6f-2573-440c-b785-5d9830dab190 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.473197979Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.31.1" id=2ac953cd-241a-443d-b8a3-825e17545d23 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.477477082Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,RepoTags:[registry.k8s.io/kube-scheduler:v1.31.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0 registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8],Size_:68420934,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=2ac953cd-241a-443d-b8a3-825e17545d23 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.478275077Z" level=info msg="Creating container: kube-system/kube-scheduler-cert-options-904767/kube-scheduler" id=c5a279e5-35c1-4cd7-8209-c31c2ed22315 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.478366030Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.605419991Z" level=info msg="Created container 1e236d0b94a4996aa1b7fdcdd7144c86d8c5f7203cda7298b12eb44c65e6dffe: kube-system/kube-controller-manager-cert-options-904767/kube-controller-manager" id=5959d1d1-c97c-4969-900d-4d6000f666d6 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.606359625Z" level=info msg="Starting container: 1e236d0b94a4996aa1b7fdcdd7144c86d8c5f7203cda7298b12eb44c65e6dffe" id=f9f33919-b881-412b-b1f8-496a656635d5 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.612526576Z" level=info msg="Created container e5eedccd0532bd21c8d0856ee3c66a834438f0761ffd0de338aa2ea71ea6d357: kube-system/kube-apiserver-cert-options-904767/kube-apiserver" id=4bb95fdc-cae9-4b9d-a160-ae735a8002cd name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.613111262Z" level=info msg="Starting container: e5eedccd0532bd21c8d0856ee3c66a834438f0761ffd0de338aa2ea71ea6d357" id=705da577-6f7b-48f6-aa99-40e69181f9b7 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.614533546Z" level=info msg="Started container" PID=1478 containerID=1e236d0b94a4996aa1b7fdcdd7144c86d8c5f7203cda7298b12eb44c65e6dffe description=kube-system/kube-controller-manager-cert-options-904767/kube-controller-manager id=f9f33919-b881-412b-b1f8-496a656635d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3acfffa410e396ccc54c75b903b76f91f282e15f88dab9420e2a21da81f41743
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.623116560Z" level=info msg="Started container" PID=1471 containerID=e5eedccd0532bd21c8d0856ee3c66a834438f0761ffd0de338aa2ea71ea6d357 description=kube-system/kube-apiserver-cert-options-904767/kube-apiserver id=705da577-6f7b-48f6-aa99-40e69181f9b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=575468d4c69acb08fa20e4abb68565d366d70d0a4e08c4f5e1e53af701538a5b
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.623148466Z" level=info msg="Created container 8031fb765a6487adef0925e6326d0027f69abf03e7f857ee3d7c8191fe1a9c4d: kube-system/kube-scheduler-cert-options-904767/kube-scheduler" id=c5a279e5-35c1-4cd7-8209-c31c2ed22315 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.623846563Z" level=info msg="Starting container: 8031fb765a6487adef0925e6326d0027f69abf03e7f857ee3d7c8191fe1a9c4d" id=a924dd0f-4cd6-42a2-90c3-e5fcbe3990d9 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.630666028Z" level=info msg="Created container 2d0017cb45b94d430e43a159dd5a30b125824f2eb455b74b902fc5bd19f9debd: kube-system/etcd-cert-options-904767/etcd" id=52664fc6-bbd3-4e43-8336-22ca4fc56daa name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.631523272Z" level=info msg="Starting container: 2d0017cb45b94d430e43a159dd5a30b125824f2eb455b74b902fc5bd19f9debd" id=627ffdfa-beb8-49c3-98ee-ff9c32b56e8b name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.632577043Z" level=info msg="Started container" PID=1516 containerID=8031fb765a6487adef0925e6326d0027f69abf03e7f857ee3d7c8191fe1a9c4d description=kube-system/kube-scheduler-cert-options-904767/kube-scheduler id=a924dd0f-4cd6-42a2-90c3-e5fcbe3990d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05b5566d9fd6544db9b9201cae5834608f90f79c65f34ef6c49c799f609dcb7f
	Sep 16 11:07:08 cert-options-904767 crio[1041]: time="2024-09-16 11:07:08.694547907Z" level=info msg="Started container" PID=1509 containerID=2d0017cb45b94d430e43a159dd5a30b125824f2eb455b74b902fc5bd19f9debd description=kube-system/etcd-cert-options-904767/etcd id=627ffdfa-beb8-49c3-98ee-ff9c32b56e8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=99c0c727627f078c34416d25fa71ff027161d6996a6f356aa808e39e6fa369a0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8031fb765a648       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   8 seconds ago       Running             kube-scheduler            0                   05b5566d9fd65       kube-scheduler-cert-options-904767
	2d0017cb45b94       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      0                   99c0c727627f0       etcd-cert-options-904767
	1e236d0b94a49       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   8 seconds ago       Running             kube-controller-manager   0                   3acfffa410e39       kube-controller-manager-cert-options-904767
	e5eedccd0532b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   8 seconds ago       Running             kube-apiserver            0                   575468d4c69ac       kube-apiserver-cert-options-904767
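
Since kubectl itself is the broken binary in this run, the table above is best re-derived straight from CRI-O. A sketch using crictl on the node (assuming crictl is present, as it normally is in the kicbase image; sudo is needed for the CRI-O socket):

    sudo crictl ps                                      # running containers, matching the table above
    sudo crictl pods                                    # pod sandboxes (the POD ID column)
    sudo crictl logs 8031fb765a648                      # container logs by (truncated) ID
    sudo crictl inspecti registry.k8s.io/etcd:3.5.15-0  # image status, as in the ImageStatus calls above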
	
	
	==> describe nodes <==
	Name:               cert-options-904767
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-options-904767
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=cert-options-904767
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_07_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:07:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-options-904767
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:07:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:07:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:07:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:07:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:07:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    cert-options-904767
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c4a4c35f6254c589a6da49800d0010d
	  System UUID:                2ba66eec-0b7c-4a98-9b6f-eda0dcf30fbe
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-cert-options-904767                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4s
	  kube-system                 kube-apiserver-cert-options-904767             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-cert-options-904767    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-cert-options-904767             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age              From     Message
	  ----     ------                   ----             ----     -------
	  Normal   NodeHasSufficientMemory  9s (x8 over 9s)  kubelet  Node cert-options-904767 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet  Node cert-options-904767 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x7 over 9s)  kubelet  Node cert-options-904767 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s               kubelet  Starting kubelet.
	  Warning  CgroupV1                 4s               kubelet  Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4s               kubelet  Node cert-options-904767 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s               kubelet  Node cert-options-904767 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s               kubelet  Node cert-options-904767 status is now: NodeHasSufficientPID
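
The Ready=False condition and the not-ready taint are expected at this stage: minikube chose kindnet earlier in the log, and the kubelet keeps the node NotReady until a CNI config appears in /etc/cni/net.d. A quick way to watch this resolve on the node (a sketch; the kubectl path is the node-local binary the log itself invokes, since the host kubectl is broken):

    ls /etc/cni/net.d/                  # empty until kindnet writes its CNI config
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get nodes -o wide                 # Ready flips to True once the CNI config exists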
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 10:58] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000008] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000013] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000137] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +1.004052] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
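
The repeated "martian source" messages mean the kernel saw packets claiming source 10.96.0.1 (the service-CIDR gateway) on a Docker bridge where reverse-path filtering does not expect that source; with several minikube clusters sharing this CI host the noise is harmless. Whether such packets are logged at all is sysctl-controlled, e.g. (a sketch):

    sysctl net.ipv4.conf.all.log_martians   # 1 = log martian packets (the lines above)
    sysctl net.ipv4.conf.all.rp_filter      # reverse-path filter: 0 off, 1 strict, 2 loose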
	
	
	==> etcd [2d0017cb45b94d430e43a159dd5a30b125824f2eb455b74b902fc5bd19f9debd] <==
	{"level":"info","ts":"2024-09-16T11:07:08.733960Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:07:08.734123Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:07:08.734206Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:07:08.734243Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:07:08.734309Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:07:09.624029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:07:09.624102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:07:09.624120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2024-09-16T11:07:09.624131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:07:09.624137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:07:09.624146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:07:09.624153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:07:09.625176Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:cert-options-904767 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:07:09.625198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:07:09.625213Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:07:09.625173Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:07:09.625429Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:07:09.625588Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:07:09.626171Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:07:09.626295Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:07:09.626332Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:07:09.626616Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:07:09.626859Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:07:09.628454Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-09-16T11:07:09.628905Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:07:17 up 49 min,  0 users,  load average: 3.50, 2.94, 1.81
	Linux cert-options-904767 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [e5eedccd0532bd21c8d0856ee3c66a834438f0761ffd0de338aa2ea71ea6d357] <==
	I0916 11:07:11.012620       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:07:11.012656       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:07:11.012664       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:07:11.014500       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:07:11.015201       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:07:11.015283       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:07:11.015319       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:07:11.015350       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:07:11.015381       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:07:11.017089       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 11:07:11.017151       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:07:11.213038       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:07:11.927300       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:07:11.932874       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:07:11.932898       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:07:12.429593       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:07:12.469885       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:07:12.529325       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:07:12.536399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0916 11:07:12.537674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:07:12.542163       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:07:12.947962       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:07:13.346378       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:07:13.359884       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:07:13.370623       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
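
The "quota admission added evaluator" lines record the first object of each kind passing through admission as the control plane bootstraps. Independently of kubectl, the apiserver's health endpoints are reachable anonymously (the default system:public-info-viewer binding permits it); port 8555 is this cluster's non-default apiserver port (a sketch):

    curl -k https://192.168.76.2:8555/healthz            # returns "ok", as the minikube log saw above
    curl -k 'https://192.168.76.2:8555/readyz?verbose'   # per-check readiness breakdown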
	
	
	==> kube-controller-manager [1e236d0b94a4996aa1b7fdcdd7144c86d8c5f7203cda7298b12eb44c65e6dffe] <==
	I0916 11:07:16.045215       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0916 11:07:16.045260       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0916 11:07:16.045276       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0916 11:07:16.197253       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I0916 11:07:16.197481       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0916 11:07:16.197504       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0916 11:07:16.450769       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I0916 11:07:16.450832       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0916 11:07:16.450843       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0916 11:07:16.596309       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I0916 11:07:16.596427       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0916 11:07:16.596452       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0916 11:07:16.644619       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0916 11:07:16.644675       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	E0916 11:07:16.694814       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0916 11:07:16.694839       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0916 11:07:16.694850       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0916 11:07:16.847094       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I0916 11:07:16.847175       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0916 11:07:16.847189       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0916 11:07:16.996756       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I0916 11:07:16.996851       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0916 11:07:17.146243       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I0916 11:07:17.146367       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0916 11:07:17.146386       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	
	
	==> kube-scheduler [8031fb765a6487adef0925e6326d0027f69abf03e7f857ee3d7c8191fe1a9c4d] <==
	W0916 11:07:11.094906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:07:11.094927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:11.863221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:07:11.863263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:11.955385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:07:11.955429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.012039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:07:12.012083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.032132       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:07:12.032180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.111698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:07:12.111755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.112606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:07:12.112681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.115610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:07:12.115669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.173618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:07:12.173664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.191284       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:07:12.191341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.226837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:07:12.226886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:07:12.253766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:07:12.253815       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0916 11:07:12.631568       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.416760    1657 kubelet_node_status.go:72] "Attempting to register node" node="cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.416766    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71df7dd0113638e194f85f9b2c1810ab-kubeconfig\") pod \"kube-scheduler-cert-options-904767\" (UID: \"71df7dd0113638e194f85f9b2c1810ab\") " pod="kube-system/kube-scheduler-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.416938    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4029447310ae81d0b0b70ddbccaab2c1-etc-ca-certificates\") pod \"kube-apiserver-cert-options-904767\" (UID: \"4029447310ae81d0b0b70ddbccaab2c1\") " pod="kube-system/kube-apiserver-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417015    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9cce8ae844bfdc843bb21c51729a6664-usr-local-share-ca-certificates\") pod \"kube-controller-manager-cert-options-904767\" (UID: \"9cce8ae844bfdc843bb21c51729a6664\") " pod="kube-system/kube-controller-manager-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417047    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/bf635b1131d71d45f48ce4c45a03ac1e-etcd-data\") pod \"etcd-cert-options-904767\" (UID: \"bf635b1131d71d45f48ce4c45a03ac1e\") " pod="kube-system/etcd-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417074    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4029447310ae81d0b0b70ddbccaab2c1-ca-certs\") pod \"kube-apiserver-cert-options-904767\" (UID: \"4029447310ae81d0b0b70ddbccaab2c1\") " pod="kube-system/kube-apiserver-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417118    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4029447310ae81d0b0b70ddbccaab2c1-usr-share-ca-certificates\") pod \"kube-apiserver-cert-options-904767\" (UID: \"4029447310ae81d0b0b70ddbccaab2c1\") " pod="kube-system/kube-apiserver-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417146    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9cce8ae844bfdc843bb21c51729a6664-ca-certs\") pod \"kube-controller-manager-cert-options-904767\" (UID: \"9cce8ae844bfdc843bb21c51729a6664\") " pod="kube-system/kube-controller-manager-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417174    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4029447310ae81d0b0b70ddbccaab2c1-usr-local-share-ca-certificates\") pod \"kube-apiserver-cert-options-904767\" (UID: \"4029447310ae81d0b0b70ddbccaab2c1\") " pod="kube-system/kube-apiserver-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417198    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9cce8ae844bfdc843bb21c51729a6664-k8s-certs\") pod \"kube-controller-manager-cert-options-904767\" (UID: \"9cce8ae844bfdc843bb21c51729a6664\") " pod="kube-system/kube-controller-manager-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417229    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/bf635b1131d71d45f48ce4c45a03ac1e-etcd-certs\") pod \"etcd-cert-options-904767\" (UID: \"bf635b1131d71d45f48ce4c45a03ac1e\") " pod="kube-system/etcd-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417278    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4029447310ae81d0b0b70ddbccaab2c1-k8s-certs\") pod \"kube-apiserver-cert-options-904767\" (UID: \"4029447310ae81d0b0b70ddbccaab2c1\") " pod="kube-system/kube-apiserver-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417322    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9cce8ae844bfdc843bb21c51729a6664-etc-ca-certificates\") pod \"kube-controller-manager-cert-options-904767\" (UID: \"9cce8ae844bfdc843bb21c51729a6664\") " pod="kube-system/kube-controller-manager-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417396    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9cce8ae844bfdc843bb21c51729a6664-flexvolume-dir\") pod \"kube-controller-manager-cert-options-904767\" (UID: \"9cce8ae844bfdc843bb21c51729a6664\") " pod="kube-system/kube-controller-manager-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417431    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9cce8ae844bfdc843bb21c51729a6664-kubeconfig\") pod \"kube-controller-manager-cert-options-904767\" (UID: \"9cce8ae844bfdc843bb21c51729a6664\") " pod="kube-system/kube-controller-manager-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.417455    1657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9cce8ae844bfdc843bb21c51729a6664-usr-share-ca-certificates\") pod \"kube-controller-manager-cert-options-904767\" (UID: \"9cce8ae844bfdc843bb21c51729a6664\") " pod="kube-system/kube-controller-manager-cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.425205    1657 kubelet_node_status.go:111] "Node was previously registered" node="cert-options-904767"
	Sep 16 11:07:13 cert-options-904767 kubelet[1657]: I0916 11:07:13.425306    1657 kubelet_node_status.go:75] "Successfully registered node" node="cert-options-904767"
	Sep 16 11:07:14 cert-options-904767 kubelet[1657]: I0916 11:07:14.207238    1657 apiserver.go:52] "Watching apiserver"
	Sep 16 11:07:14 cert-options-904767 kubelet[1657]: I0916 11:07:14.214772    1657 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 11:07:14 cert-options-904767 kubelet[1657]: E0916 11:07:14.312436    1657 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-cert-options-904767\" already exists" pod="kube-system/etcd-cert-options-904767"
	Sep 16 11:07:14 cert-options-904767 kubelet[1657]: I0916 11:07:14.349218    1657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-cert-options-904767" podStartSLOduration=1.349184765 podStartE2EDuration="1.349184765s" podCreationTimestamp="2024-09-16 11:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:07:14.332399174 +0000 UTC m=+1.187734184" watchObservedRunningTime="2024-09-16 11:07:14.349184765 +0000 UTC m=+1.204519773"
	Sep 16 11:07:14 cert-options-904767 kubelet[1657]: I0916 11:07:14.399705    1657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-cert-options-904767" podStartSLOduration=1.399682218 podStartE2EDuration="1.399682218s" podCreationTimestamp="2024-09-16 11:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:07:14.354076121 +0000 UTC m=+1.209411127" watchObservedRunningTime="2024-09-16 11:07:14.399682218 +0000 UTC m=+1.255017228"
	Sep 16 11:07:14 cert-options-904767 kubelet[1657]: I0916 11:07:14.415333    1657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-cert-options-904767" podStartSLOduration=1.415307897 podStartE2EDuration="1.415307897s" podCreationTimestamp="2024-09-16 11:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:07:14.40024138 +0000 UTC m=+1.255576390" watchObservedRunningTime="2024-09-16 11:07:14.415307897 +0000 UTC m=+1.270642907"
	Sep 16 11:07:14 cert-options-904767 kubelet[1657]: I0916 11:07:14.429981    1657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-cert-options-904767" podStartSLOduration=1.429953149 podStartE2EDuration="1.429953149s" podCreationTimestamp="2024-09-16 11:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:07:14.416051125 +0000 UTC m=+1.271386134" watchObservedRunningTime="2024-09-16 11:07:14.429953149 +0000 UTC m=+1.285288158"
	

-- /stdout --
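Note: the kube-scheduler reflector warnings near the top of this log ("... is forbidden: User \"system:kube-scheduler\" ...") look like the usual startup race rather than a test failure: the scheduler's informers begin listing resources before the API server has finished installing the bootstrap RBAC bindings, and the noise stops once the caches sync (the final "Caches are synced" line above). If such errors persisted past startup, one reasonable check would be to inspect the bootstrap binding (illustrative command, not part of this run):

	kubectl get clusterrolebinding system:kube-scheduler -o wide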
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-options-904767 -n cert-options-904767
helpers_test.go:261: (dbg) Run:  kubectl --context cert-options-904767 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context cert-options-904767 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (515.46µs)
helpers_test.go:263: kubectl --context cert-options-904767 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:175: Cleaning up "cert-options-904767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-904767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-904767: (1.881255635s)
--- FAIL: TestCertOptions (25.33s)
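Note: the recurring failure mode throughout this report is "fork/exec /usr/local/bin/kubectl: exec format error". That error comes from the kernel refusing to execute the kubectl binary itself, which typically means the file at that path is not a valid executable for this host: wrong architecture, a truncated download, or an error page saved in place of the binary. Since every kubectl invocation fails identically in well under a millisecond, the clusters themselves are likely healthy and the agent's kubectl install is the suspect. A minimal triage sketch, assuming shell access to the Jenkins agent (illustrative commands, not part of the test run):

	file /usr/local/bin/kubectl   # a healthy install reports an ELF 64-bit x86-64 executable
	uname -m                      # host architecture; the logs below report x86_64 for this agent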

TestFunctional/serial/KubeContext (2.35s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: fork/exec /usr/local/bin/kubectl: exec format error (505.303µs)
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:687: expected current-context = "functional-546931", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-546931
helpers_test.go:235: (dbg) docker inspect functional-546931:

-- stdout --
	[
	    {
	        "Id": "481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383",
	        "Created": "2024-09-16T10:33:07.830189623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:33:07.949246182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hostname",
	        "HostsPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hosts",
	        "LogPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383-json.log",
	        "Name": "/functional-546931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-546931:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-546931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-546931",
	                "Source": "/var/lib/docker/volumes/functional-546931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546931",
	                "name.minikube.sigs.k8s.io": "functional-546931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a63c1ddb1b935e3fe8e5ef70fdb0c600197ad5f66a82a23245d6065ac1a636ff",
	            "SandboxKey": "/var/run/docker/netns/a63c1ddb1b93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c19058e5aabeca0bc30434433d26203e7a45051a16cbafeae207abc5b1915f6c",
	                    "EndpointID": "d06fb1106d7a54a1e55e6e03322a29be01414e698106136216a156a15ae725c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546931",
	                        "481b09cdfdae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
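Note: the inspect output above shows how the kic container publishes the guest ports (22, 2376, 5000, 8441, 32443) on ephemeral loopback ports; the API server port 8441 maps to 127.0.0.1:32781 in this run. minikube resolves these mappings with a Go template over the same data (the 22/tcp variant appears in the provisioning log below); the equivalent manual query for the API server port would be:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-546931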
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546931 -n functional-546931
helpers_test.go:244: <<< TestFunctional/serial/KubeContext FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubeContext]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs -n 25: (1.558299714s)
helpers_test.go:252: TestFunctional/serial/KubeContext logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | enable headlamp                | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781               |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| addons  | addons-821781 addons disable   | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | headlamp --alsologtostderr     |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-821781 addons disable   | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-821781 addons           | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:31 UTC |
	|         | disable metrics-server         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| stop    | -p addons-821781               | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| addons  | enable dashboard -p            | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-821781                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-821781                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-821781                  |                   |         |         |                     |                     |
	| delete  | -p addons-821781               | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| start   | -p nospam-530798 -n=1          | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-530798   |                   |         |         |                     |                     |
	|         | --driver=docker                |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC |                     |
	|         | /tmp/nospam-530798 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC |                     |
	|         | /tmp/nospam-530798 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC |                     |
	|         | /tmp/nospam-530798 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 pause       |                   |         |         |                     |                     |
	| pause   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 pause       |                   |         |         |                     |                     |
	| pause   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 pause       |                   |         |         |                     |                     |
	| unpause | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop        |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop        |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-530798               | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	| start   | -p functional-546931           | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-546931           | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:34 UTC |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:33:40
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:33:40.770875   38254 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:33:40.771214   38254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:33:40.771225   38254 out.go:358] Setting ErrFile to fd 2...
	I0916 10:33:40.771229   38254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:33:40.771468   38254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:33:40.772058   38254 out.go:352] Setting JSON to false
	I0916 10:33:40.772994   38254 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":961,"bootTime":1726481860,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:33:40.773092   38254 start.go:139] virtualization: kvm guest
	I0916 10:33:40.775582   38254 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:33:40.776810   38254 notify.go:220] Checking for updates...
	I0916 10:33:40.776824   38254 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:33:40.778328   38254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:33:40.779827   38254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:33:40.781225   38254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:33:40.782854   38254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:33:40.784657   38254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:33:40.787127   38254 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:33:40.787260   38254 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:33:40.811874   38254 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:33:40.812025   38254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:33:40.868273   38254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 10:33:40.858814631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:33:40.868372   38254 docker.go:318] overlay module found
	I0916 10:33:40.870598   38254 out.go:177] * Using the docker driver based on existing profile
	I0916 10:33:40.872000   38254 start.go:297] selected driver: docker
	I0916 10:33:40.872020   38254 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:33:40.872110   38254 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:33:40.872236   38254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:33:40.926447   38254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 10:33:40.915860884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:33:40.927025   38254 cni.go:84] Creating CNI manager for ""
	I0916 10:33:40.927063   38254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:33:40.927104   38254 start.go:340] cluster config:
	{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:33:40.929251   38254 out.go:177] * Starting "functional-546931" primary control-plane node in "functional-546931" cluster
	I0916 10:33:40.930726   38254 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:33:40.932156   38254 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:33:40.933438   38254 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:33:40.933468   38254 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:33:40.933483   38254 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:33:40.933499   38254 cache.go:56] Caching tarball of preloaded images
	I0916 10:33:40.933594   38254 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:33:40.933606   38254 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:33:40.933720   38254 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/config.json ...
	W0916 10:33:40.954493   38254 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:33:40.954521   38254 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:33:40.954610   38254 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:33:40.954627   38254 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:33:40.954631   38254 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:33:40.954639   38254 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:33:40.954646   38254 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:33:40.956035   38254 image.go:273] response: 
	I0916 10:33:41.014396   38254 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:33:41.014445   38254 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:33:41.014478   38254 start.go:360] acquireMachinesLock for functional-546931: {Name:mk0ba09111db367b90aa515f201f345e63335cec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:33:41.014562   38254 start.go:364] duration metric: took 44.876µs to acquireMachinesLock for "functional-546931"
	I0916 10:33:41.014581   38254 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:33:41.014588   38254 fix.go:54] fixHost starting: 
	I0916 10:33:41.014788   38254 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:33:41.032464   38254 fix.go:112] recreateIfNeeded on functional-546931: state=Running err=<nil>
	W0916 10:33:41.032501   38254 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:33:41.034913   38254 out.go:177] * Updating the running docker "functional-546931" container ...
	I0916 10:33:41.036263   38254 machine.go:93] provisionDockerMachine start ...
	I0916 10:33:41.036349   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.055346   38254 main.go:141] libmachine: Using SSH client type: native
	I0916 10:33:41.055594   38254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:33:41.055611   38254 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:33:41.192774   38254 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546931
	
	I0916 10:33:41.192811   38254 ubuntu.go:169] provisioning hostname "functional-546931"
	I0916 10:33:41.192875   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.211900   38254 main.go:141] libmachine: Using SSH client type: native
	I0916 10:33:41.212128   38254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:33:41.212148   38254 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546931 && echo "functional-546931" | sudo tee /etc/hostname
	I0916 10:33:41.360228   38254 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546931
	
	I0916 10:33:41.360314   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.377015   38254 main.go:141] libmachine: Using SSH client type: native
	I0916 10:33:41.377240   38254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:33:41.377259   38254 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546931/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:33:41.509419   38254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:33:41.509453   38254 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:33:41.509476   38254 ubuntu.go:177] setting up certificates
	I0916 10:33:41.509484   38254 provision.go:84] configureAuth start
	I0916 10:33:41.509533   38254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-546931
	I0916 10:33:41.527045   38254 provision.go:143] copyHostCerts
	I0916 10:33:41.527081   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:33:41.527116   38254 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:33:41.527126   38254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:33:41.527187   38254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:33:41.527269   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:33:41.527294   38254 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:33:41.527304   38254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:33:41.527343   38254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:33:41.527399   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:33:41.527417   38254 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:33:41.527424   38254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:33:41.527446   38254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:33:41.527495   38254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.functional-546931 san=[127.0.0.1 192.168.49.2 functional-546931 localhost minikube]
	I0916 10:33:41.723877   38254 provision.go:177] copyRemoteCerts
	I0916 10:33:41.723943   38254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:33:41.723990   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.742923   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:41.842009   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:33:41.842070   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:33:41.863475   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:33:41.863546   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:33:41.885728   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:33:41.885808   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:33:41.908294   38254 provision.go:87] duration metric: took 398.792469ms to configureAuth
	I0916 10:33:41.908321   38254 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:33:41.908487   38254 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:33:41.908581   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.926776   38254 main.go:141] libmachine: Using SSH client type: native
	I0916 10:33:41.926981   38254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:33:41.926998   38254 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:33:47.267116   38254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:33:47.267143   38254 machine.go:96] duration metric: took 6.230864456s to provisionDockerMachine
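Most of that 6.2s is the crio restart triggered by writing CRIO_MINIKUBE_OPTIONS. To confirm the daemon picked up the extra flag, a hypothetical manual check (assuming the kicbase crio.service loads /etc/sysconfig/crio.minikube via EnvironmentFile):

    cat /etc/sysconfig/crio.minikube
    ps -C crio -o args=    # --insecure-registry 10.96.0.0/12 should appear in the command line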
	I0916 10:33:47.267157   38254 start.go:293] postStartSetup for "functional-546931" (driver="docker")
	I0916 10:33:47.267171   38254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:33:47.267223   38254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:33:47.267257   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:47.284010   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:47.377932   38254 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:33:47.380909   38254 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:33:47.380929   38254 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:33:47.380936   38254 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:33:47.380944   38254 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:33:47.380950   38254 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:33:47.380956   38254 command_runner.go:130] > ID=ubuntu
	I0916 10:33:47.380961   38254 command_runner.go:130] > ID_LIKE=debian
	I0916 10:33:47.380968   38254 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:33:47.380977   38254 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:33:47.380987   38254 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:33:47.381000   38254 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:33:47.381006   38254 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:33:47.381061   38254 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:33:47.381093   38254 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:33:47.381106   38254 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:33:47.381118   38254 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:33:47.381131   38254 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:33:47.381194   38254 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:33:47.381292   38254 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:33:47.381305   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:33:47.381411   38254 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts -> hosts in /etc/test/nested/copy/11208
	I0916 10:33:47.381419   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts -> /etc/test/nested/copy/11208/hosts
	I0916 10:33:47.381467   38254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11208
	I0916 10:33:47.389827   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:33:47.411941   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts --> /etc/test/nested/copy/11208/hosts (40 bytes)
	I0916 10:33:47.433973   38254 start.go:296] duration metric: took 166.799134ms for postStartSetup
	I0916 10:33:47.434042   38254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:33:47.434075   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:47.451092   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:47.542209   38254 command_runner.go:130] > 30%
	I0916 10:33:47.542290   38254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:33:47.546531   38254 command_runner.go:130] > 205G
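Both disk probes read df output positionally: NR==2 selects the first data row after the header, where $5 is the Use% column and $4 the available space. Annotated, the same commands are:

    df -h /var  | awk 'NR==2{print $5}'   # percent of /var in use (30% above)
    df -BG /var | awk 'NR==2{print $4}'   # space left on /var in 1 GiB blocks (205G above)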
	I0916 10:33:47.546731   38254 fix.go:56] duration metric: took 6.5321272s for fixHost
	I0916 10:33:47.546753   38254 start.go:83] releasing machines lock for "functional-546931", held for 6.53217868s
	I0916 10:33:47.546819   38254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-546931
	I0916 10:33:47.563606   38254 ssh_runner.go:195] Run: cat /version.json
	I0916 10:33:47.563637   38254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:33:47.563674   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:47.563716   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:47.581622   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:47.582240   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:47.676950   38254 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:33:47.677144   38254 ssh_runner.go:195] Run: systemctl --version
	I0916 10:33:47.751671   38254 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:33:47.751721   38254 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:33:47.751745   38254 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:33:47.751805   38254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:33:47.889831   38254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:33:47.894036   38254 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf.mk_disabled
	I0916 10:33:47.894064   38254 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:33:47.894074   38254 command_runner.go:130] > Device: 37h/55d	Inode: 535096      Links: 1
	I0916 10:33:47.894083   38254 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:33:47.894089   38254 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:33:47.894094   38254 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:33:47.894099   38254 command_runner.go:130] > Change: 2024-09-16 10:33:10.369617623 +0000
	I0916 10:33:47.894104   38254 command_runner.go:130] >  Birth: 2024-09-16 10:33:10.369617623 +0000
	I0916 10:33:47.894157   38254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:33:47.902355   38254 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:33:47.902411   38254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:33:47.910389   38254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
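Disabling works by renaming matching CNI configs with a .mk_disabled suffix so the runtime's config loader ignores them. A hypothetical manual rollback, should the loopback config ever need restoring:

    for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done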
	I0916 10:33:47.910416   38254 start.go:495] detecting cgroup driver to use...
	I0916 10:33:47.910444   38254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:33:47.910486   38254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:33:47.921885   38254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:33:47.932184   38254 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:33:47.932238   38254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:33:47.944255   38254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:33:47.954927   38254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:33:48.063649   38254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:33:48.173240   38254 docker.go:233] disabling docker service ...
	I0916 10:33:48.173304   38254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:33:48.185048   38254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:33:48.195758   38254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:33:48.304682   38254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:33:48.409454   38254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:33:48.420073   38254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:33:48.434731   38254 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:33:48.434777   38254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:33:48.434822   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.443602   38254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:33:48.443670   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.452457   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.461402   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.470379   38254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:33:48.479040   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.487789   38254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.496160   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
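Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a reconstruction from the commands, not a dump of the file; the section headers assume CRI-O's standard layout):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"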
	I0916 10:33:48.504870   38254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:33:48.511765   38254 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:33:48.512422   38254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:33:48.520403   38254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:33:48.628460   38254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:33:48.760479   38254 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:33:48.760539   38254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:33:48.764057   38254 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:33:48.764081   38254 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:33:48.764090   38254 command_runner.go:130] > Device: 40h/64d	Inode: 556         Links: 1
	I0916 10:33:48.764100   38254 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:33:48.764107   38254 command_runner.go:130] > Access: 2024-09-16 10:33:48.724442048 +0000
	I0916 10:33:48.764121   38254 command_runner.go:130] > Modify: 2024-09-16 10:33:48.724442048 +0000
	I0916 10:33:48.764134   38254 command_runner.go:130] > Change: 2024-09-16 10:33:48.724442048 +0000
	I0916 10:33:48.764144   38254 command_runner.go:130] >  Birth: -
	I0916 10:33:48.764168   38254 start.go:563] Will wait 60s for crictl version
	I0916 10:33:48.764206   38254 ssh_runner.go:195] Run: which crictl
	I0916 10:33:48.767272   38254 command_runner.go:130] > /usr/bin/crictl
	I0916 10:33:48.767358   38254 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:33:48.798589   38254 command_runner.go:130] > Version:  0.1.0
	I0916 10:33:48.798608   38254 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:33:48.798619   38254 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:33:48.798625   38254 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:33:48.800498   38254 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:33:48.800571   38254 ssh_runner.go:195] Run: crio --version
	I0916 10:33:48.833121   38254 command_runner.go:130] > crio version 1.24.6
	I0916 10:33:48.833142   38254 command_runner.go:130] > Version:          1.24.6
	I0916 10:33:48.833150   38254 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:33:48.833154   38254 command_runner.go:130] > GitTreeState:     clean
	I0916 10:33:48.833160   38254 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:33:48.833165   38254 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:33:48.833170   38254 command_runner.go:130] > Compiler:         gc
	I0916 10:33:48.833174   38254 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:33:48.833179   38254 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:33:48.833186   38254 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:33:48.833190   38254 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:33:48.833199   38254 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:33:48.834514   38254 ssh_runner.go:195] Run: crio --version
	I0916 10:33:48.867161   38254 command_runner.go:130] > crio version 1.24.6
	I0916 10:33:48.867194   38254 command_runner.go:130] > Version:          1.24.6
	I0916 10:33:48.867202   38254 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:33:48.867206   38254 command_runner.go:130] > GitTreeState:     clean
	I0916 10:33:48.867212   38254 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:33:48.867216   38254 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:33:48.867220   38254 command_runner.go:130] > Compiler:         gc
	I0916 10:33:48.867225   38254 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:33:48.867230   38254 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:33:48.867237   38254 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:33:48.867244   38254 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:33:48.867249   38254 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:33:48.870738   38254 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:33:48.872074   38254 cli_runner.go:164] Run: docker network inspect functional-546931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:33:48.888862   38254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:33:48.892499   38254 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I0916 10:33:48.892597   38254 kubeadm.go:883] updating cluster {Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:33:48.892702   38254 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:33:48.892742   38254 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:33:48.927357   38254 command_runner.go:130] > {
	I0916 10:33:48.927387   38254 command_runner.go:130] >   "images": [
	I0916 10:33:48.927392   38254 command_runner.go:130] >     {
	I0916 10:33:48.927400   38254 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:33:48.927405   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927411   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:33:48.927415   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927419   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927428   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:33:48.927435   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:33:48.927439   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927443   38254 command_runner.go:130] >       "size": "87190579",
	I0916 10:33:48.927447   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.927451   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927460   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927464   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927468   38254 command_runner.go:130] >     },
	I0916 10:33:48.927471   38254 command_runner.go:130] >     {
	I0916 10:33:48.927477   38254 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:33:48.927484   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927490   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:33:48.927494   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927497   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927505   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:33:48.927520   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:33:48.927523   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927530   38254 command_runner.go:130] >       "size": "31470524",
	I0916 10:33:48.927536   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.927541   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927547   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927551   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927555   38254 command_runner.go:130] >     },
	I0916 10:33:48.927560   38254 command_runner.go:130] >     {
	I0916 10:33:48.927568   38254 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:33:48.927572   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927580   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:33:48.927583   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927587   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927595   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:33:48.927604   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:33:48.927608   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927612   38254 command_runner.go:130] >       "size": "63273227",
	I0916 10:33:48.927618   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.927622   38254 command_runner.go:130] >       "username": "nonroot",
	I0916 10:33:48.927628   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927632   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927635   38254 command_runner.go:130] >     },
	I0916 10:33:48.927639   38254 command_runner.go:130] >     {
	I0916 10:33:48.927644   38254 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:33:48.927649   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927654   38254 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:33:48.927664   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927669   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927675   38254 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:33:48.927686   38254 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:33:48.927692   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927696   38254 command_runner.go:130] >       "size": "149009664",
	I0916 10:33:48.927702   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.927706   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.927711   38254 command_runner.go:130] >       },
	I0916 10:33:48.927715   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927719   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927723   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927727   38254 command_runner.go:130] >     },
	I0916 10:33:48.927730   38254 command_runner.go:130] >     {
	I0916 10:33:48.927737   38254 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:33:48.927743   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927748   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:33:48.927752   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927756   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927763   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:33:48.927774   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:33:48.927781   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927785   38254 command_runner.go:130] >       "size": "95237600",
	I0916 10:33:48.927791   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.927794   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.927798   38254 command_runner.go:130] >       },
	I0916 10:33:48.927802   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927808   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927812   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927815   38254 command_runner.go:130] >     },
	I0916 10:33:48.927818   38254 command_runner.go:130] >     {
	I0916 10:33:48.927824   38254 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:33:48.927830   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927835   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:33:48.927839   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927843   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927851   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:33:48.927861   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:33:48.927866   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927870   38254 command_runner.go:130] >       "size": "89437508",
	I0916 10:33:48.927875   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.927880   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.927884   38254 command_runner.go:130] >       },
	I0916 10:33:48.927887   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927891   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927897   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927901   38254 command_runner.go:130] >     },
	I0916 10:33:48.927905   38254 command_runner.go:130] >     {
	I0916 10:33:48.927913   38254 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:33:48.927920   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927925   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:33:48.927928   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927933   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927940   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:33:48.927949   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:33:48.927953   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927957   38254 command_runner.go:130] >       "size": "92733849",
	I0916 10:33:48.927963   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.927967   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927973   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927977   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927980   38254 command_runner.go:130] >     },
	I0916 10:33:48.927987   38254 command_runner.go:130] >     {
	I0916 10:33:48.927993   38254 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:33:48.927998   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.928003   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:33:48.928008   38254 command_runner.go:130] >       ],
	I0916 10:33:48.928011   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.928028   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:33:48.928038   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:33:48.928041   38254 command_runner.go:130] >       ],
	I0916 10:33:48.928045   38254 command_runner.go:130] >       "size": "68420934",
	I0916 10:33:48.928049   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.928053   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.928056   38254 command_runner.go:130] >       },
	I0916 10:33:48.928061   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.928064   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.928070   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.928073   38254 command_runner.go:130] >     },
	I0916 10:33:48.928079   38254 command_runner.go:130] >     {
	I0916 10:33:48.928087   38254 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:33:48.928093   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.928098   38254 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:33:48.928101   38254 command_runner.go:130] >       ],
	I0916 10:33:48.928105   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.928112   38254 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:33:48.928124   38254 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:33:48.928128   38254 command_runner.go:130] >       ],
	I0916 10:33:48.928152   38254 command_runner.go:130] >       "size": "742080",
	I0916 10:33:48.928163   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.928167   38254 command_runner.go:130] >         "value": "65535"
	I0916 10:33:48.928170   38254 command_runner.go:130] >       },
	I0916 10:33:48.928174   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.928178   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.928182   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.928185   38254 command_runner.go:130] >     }
	I0916 10:33:48.928188   38254 command_runner.go:130] >   ]
	I0916 10:33:48.928191   38254 command_runner.go:130] > }
	I0916 10:33:48.929512   38254 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:33:48.929532   38254 crio.go:433] Images already preloaded, skipping extraction
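The preload check parses the JSON above against the expected image set for Kubernetes v1.31.1. For an ad-hoc look at just the tags, assuming jq is available on the node:

    sudo crictl images --output json | jq -r '.images[].repoTags[]'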
	I0916 10:33:48.929573   38254 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:33:48.959189   38254 command_runner.go:130] > {
	I0916 10:33:48.959209   38254 command_runner.go:130] >   "images": [
	I0916 10:33:48.959213   38254 command_runner.go:130] >     {
	I0916 10:33:48.959222   38254 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:33:48.959227   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959233   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:33:48.959240   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959243   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959252   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:33:48.959259   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:33:48.959265   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959270   38254 command_runner.go:130] >       "size": "87190579",
	I0916 10:33:48.959277   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.959281   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959286   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959290   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959296   38254 command_runner.go:130] >     },
	I0916 10:33:48.959299   38254 command_runner.go:130] >     {
	I0916 10:33:48.959305   38254 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:33:48.959312   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959317   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:33:48.959324   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959328   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959335   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:33:48.959344   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:33:48.959348   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959357   38254 command_runner.go:130] >       "size": "31470524",
	I0916 10:33:48.959363   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.959382   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959391   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959395   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959399   38254 command_runner.go:130] >     },
	I0916 10:33:48.959403   38254 command_runner.go:130] >     {
	I0916 10:33:48.959410   38254 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:33:48.959414   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959419   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:33:48.959425   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959428   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959435   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:33:48.959455   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:33:48.959461   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959465   38254 command_runner.go:130] >       "size": "63273227",
	I0916 10:33:48.959469   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.959474   38254 command_runner.go:130] >       "username": "nonroot",
	I0916 10:33:48.959478   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959482   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959485   38254 command_runner.go:130] >     },
	I0916 10:33:48.959489   38254 command_runner.go:130] >     {
	I0916 10:33:48.959495   38254 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:33:48.959500   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959506   38254 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:33:48.959511   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959515   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959521   38254 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:33:48.959534   38254 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:33:48.959538   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959542   38254 command_runner.go:130] >       "size": "149009664",
	I0916 10:33:48.959546   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.959550   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.959553   38254 command_runner.go:130] >       },
	I0916 10:33:48.959559   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959564   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959577   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959583   38254 command_runner.go:130] >     },
	I0916 10:33:48.959586   38254 command_runner.go:130] >     {
	I0916 10:33:48.959592   38254 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:33:48.959598   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959603   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:33:48.959609   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959614   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959623   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:33:48.959631   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:33:48.959636   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959641   38254 command_runner.go:130] >       "size": "95237600",
	I0916 10:33:48.959645   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.959649   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.959652   38254 command_runner.go:130] >       },
	I0916 10:33:48.959656   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959660   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959663   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959667   38254 command_runner.go:130] >     },
	I0916 10:33:48.959672   38254 command_runner.go:130] >     {
	I0916 10:33:48.959678   38254 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:33:48.959682   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959687   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:33:48.959690   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959694   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959701   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:33:48.959708   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:33:48.959711   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959722   38254 command_runner.go:130] >       "size": "89437508",
	I0916 10:33:48.959725   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.959729   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.959732   38254 command_runner.go:130] >       },
	I0916 10:33:48.959737   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959740   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959744   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959747   38254 command_runner.go:130] >     },
	I0916 10:33:48.959750   38254 command_runner.go:130] >     {
	I0916 10:33:48.959756   38254 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:33:48.959761   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959766   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:33:48.959772   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959776   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959786   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:33:48.959794   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:33:48.959799   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959804   38254 command_runner.go:130] >       "size": "92733849",
	I0916 10:33:48.959810   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.959814   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959818   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959822   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959826   38254 command_runner.go:130] >     },
	I0916 10:33:48.959829   38254 command_runner.go:130] >     {
	I0916 10:33:48.959835   38254 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:33:48.959841   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959846   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:33:48.959850   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959854   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959870   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:33:48.959880   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:33:48.959883   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959887   38254 command_runner.go:130] >       "size": "68420934",
	I0916 10:33:48.959891   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.959898   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.959901   38254 command_runner.go:130] >       },
	I0916 10:33:48.959922   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959929   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959933   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959937   38254 command_runner.go:130] >     },
	I0916 10:33:48.959941   38254 command_runner.go:130] >     {
	I0916 10:33:48.959947   38254 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:33:48.959953   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959958   38254 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:33:48.959964   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959969   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959976   38254 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:33:48.959985   38254 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:33:48.959988   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959992   38254 command_runner.go:130] >       "size": "742080",
	I0916 10:33:48.959996   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.960000   38254 command_runner.go:130] >         "value": "65535"
	I0916 10:33:48.960003   38254 command_runner.go:130] >       },
	I0916 10:33:48.960007   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.960014   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.960019   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.960022   38254 command_runner.go:130] >     }
	I0916 10:33:48.960025   38254 command_runner.go:130] >   ]
	I0916 10:33:48.960029   38254 command_runner.go:130] > }
	I0916 10:33:48.961474   38254 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:33:48.961496   38254 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:33:48.961506   38254 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 crio true true} ...
	I0916 10:33:48.961618   38254 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-546931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
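The snippet above is rendered into a systemd drop-in that overrides the kubelet ExecStart (the drop-in path itself is not shown in this excerpt). A hypothetical way to inspect the effective unit once minikube has written it:

    systemctl cat kubelet    # prints kubelet.service plus any drop-ins, including this ExecStart override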
	I0916 10:33:48.961707   38254 ssh_runner.go:195] Run: crio config
	I0916 10:33:48.997137   38254 command_runner.go:130] ! time="2024-09-16 10:33:48.996693989Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0916 10:33:48.997172   38254 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 10:33:49.002096   38254 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 10:33:49.002120   38254 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 10:33:49.002130   38254 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 10:33:49.002135   38254 command_runner.go:130] > #
	I0916 10:33:49.002146   38254 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 10:33:49.002155   38254 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 10:33:49.002163   38254 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 10:33:49.002175   38254 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 10:33:49.002182   38254 command_runner.go:130] > # reload'.
	I0916 10:33:49.002196   38254 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 10:33:49.002210   38254 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 10:33:49.002221   38254 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 10:33:49.002234   38254 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 10:33:49.002243   38254 command_runner.go:130] > [crio]
	I0916 10:33:49.002255   38254 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 10:33:49.002266   38254 command_runner.go:130] > # containers images, in this directory.
	I0916 10:33:49.002277   38254 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0916 10:33:49.002286   38254 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 10:33:49.002293   38254 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0916 10:33:49.002302   38254 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 10:33:49.002317   38254 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 10:33:49.002324   38254 command_runner.go:130] > # storage_driver = "vfs"
	I0916 10:33:49.002337   38254 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 10:33:49.002347   38254 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 10:33:49.002356   38254 command_runner.go:130] > # storage_option = [
	I0916 10:33:49.002363   38254 command_runner.go:130] > # ]
	I0916 10:33:49.002376   38254 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 10:33:49.002390   38254 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 10:33:49.002395   38254 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 10:33:49.002403   38254 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 10:33:49.002410   38254 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 10:33:49.002416   38254 command_runner.go:130] > # always happen on a node reboot
	I0916 10:33:49.002421   38254 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 10:33:49.002427   38254 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 10:33:49.002440   38254 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 10:33:49.002451   38254 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 10:33:49.002459   38254 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0916 10:33:49.002474   38254 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 10:33:49.002489   38254 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 10:33:49.002497   38254 command_runner.go:130] > # internal_wipe = true
	I0916 10:33:49.002510   38254 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 10:33:49.002520   38254 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 10:33:49.002528   38254 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 10:33:49.002535   38254 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 10:33:49.002541   38254 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 10:33:49.002548   38254 command_runner.go:130] > [crio.api]
	I0916 10:33:49.002554   38254 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 10:33:49.002559   38254 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 10:33:49.002567   38254 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 10:33:49.002571   38254 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 10:33:49.002578   38254 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 10:33:49.002585   38254 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 10:33:49.002589   38254 command_runner.go:130] > # stream_port = "0"
	I0916 10:33:49.002597   38254 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 10:33:49.002601   38254 command_runner.go:130] > # stream_enable_tls = false
	I0916 10:33:49.002609   38254 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 10:33:49.002613   38254 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 10:33:49.002619   38254 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 10:33:49.002627   38254 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 10:33:49.002636   38254 command_runner.go:130] > # minutes.
	I0916 10:33:49.002640   38254 command_runner.go:130] > # stream_tls_cert = ""
	I0916 10:33:49.002648   38254 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 10:33:49.002654   38254 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 10:33:49.002658   38254 command_runner.go:130] > # stream_tls_key = ""
	I0916 10:33:49.002664   38254 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 10:33:49.002670   38254 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 10:33:49.002676   38254 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 10:33:49.002680   38254 command_runner.go:130] > # stream_tls_ca = ""
	I0916 10:33:49.002688   38254 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:33:49.002694   38254 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0916 10:33:49.002701   38254 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:33:49.002708   38254 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0916 10:33:49.002724   38254 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 10:33:49.002732   38254 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 10:33:49.002736   38254 command_runner.go:130] > [crio.runtime]
	I0916 10:33:49.002742   38254 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 10:33:49.002749   38254 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 10:33:49.002753   38254 command_runner.go:130] > # "nofile=1024:2048"
	I0916 10:33:49.002760   38254 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 10:33:49.002766   38254 command_runner.go:130] > # default_ulimits = [
	I0916 10:33:49.002769   38254 command_runner.go:130] > # ]
	I0916 10:33:49.002775   38254 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 10:33:49.002781   38254 command_runner.go:130] > # no_pivot = false
	I0916 10:33:49.002787   38254 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 10:33:49.002798   38254 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 10:33:49.002803   38254 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 10:33:49.002811   38254 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 10:33:49.002816   38254 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 10:33:49.002825   38254 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:33:49.002829   38254 command_runner.go:130] > # conmon = ""
	I0916 10:33:49.002836   38254 command_runner.go:130] > # Cgroup setting for conmon
	I0916 10:33:49.002842   38254 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 10:33:49.002849   38254 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 10:33:49.002855   38254 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 10:33:49.002860   38254 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 10:33:49.002869   38254 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:33:49.002873   38254 command_runner.go:130] > # conmon_env = [
	I0916 10:33:49.002885   38254 command_runner.go:130] > # ]
	I0916 10:33:49.002893   38254 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 10:33:49.002898   38254 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 10:33:49.002905   38254 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 10:33:49.002908   38254 command_runner.go:130] > # default_env = [
	I0916 10:33:49.002912   38254 command_runner.go:130] > # ]
	I0916 10:33:49.002917   38254 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 10:33:49.002923   38254 command_runner.go:130] > # selinux = false
	I0916 10:33:49.002930   38254 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 10:33:49.002939   38254 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 10:33:49.002947   38254 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 10:33:49.002951   38254 command_runner.go:130] > # seccomp_profile = ""
	I0916 10:33:49.002956   38254 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 10:33:49.002964   38254 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 10:33:49.002970   38254 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 10:33:49.002977   38254 command_runner.go:130] > # which might increase security.
	I0916 10:33:49.002981   38254 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0916 10:33:49.002987   38254 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 10:33:49.002996   38254 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 10:33:49.003002   38254 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 10:33:49.003010   38254 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 10:33:49.003016   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.003023   38254 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 10:33:49.003030   38254 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 10:33:49.003037   38254 command_runner.go:130] > # the cgroup blockio controller.
	I0916 10:33:49.003041   38254 command_runner.go:130] > # blockio_config_file = ""
	I0916 10:33:49.003047   38254 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 10:33:49.003053   38254 command_runner.go:130] > # irqbalance daemon.
	I0916 10:33:49.003058   38254 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 10:33:49.003066   38254 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 10:33:49.003073   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.003077   38254 command_runner.go:130] > # rdt_config_file = ""
	I0916 10:33:49.003083   38254 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 10:33:49.003088   38254 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 10:33:49.003094   38254 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 10:33:49.003100   38254 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 10:33:49.003106   38254 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 10:33:49.003114   38254 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 10:33:49.003118   38254 command_runner.go:130] > # will be added.
	I0916 10:33:49.003124   38254 command_runner.go:130] > # default_capabilities = [
	I0916 10:33:49.003128   38254 command_runner.go:130] > # 	"CHOWN",
	I0916 10:33:49.003135   38254 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 10:33:49.003139   38254 command_runner.go:130] > # 	"FSETID",
	I0916 10:33:49.003142   38254 command_runner.go:130] > # 	"FOWNER",
	I0916 10:33:49.003146   38254 command_runner.go:130] > # 	"SETGID",
	I0916 10:33:49.003149   38254 command_runner.go:130] > # 	"SETUID",
	I0916 10:33:49.003153   38254 command_runner.go:130] > # 	"SETPCAP",
	I0916 10:33:49.003157   38254 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 10:33:49.003163   38254 command_runner.go:130] > # 	"KILL",
	I0916 10:33:49.003166   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003173   38254 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 10:33:49.003182   38254 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 10:33:49.003186   38254 command_runner.go:130] > # add_inheritable_capabilities = true
	I0916 10:33:49.003192   38254 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 10:33:49.003199   38254 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:33:49.003203   38254 command_runner.go:130] > default_sysctls = [
	I0916 10:33:49.003208   38254 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 10:33:49.003212   38254 command_runner.go:130] > ]
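The only sysctl enabled above, net.ipv4.ip_unprivileged_port_start=0, lets non-root container processes bind ports below 1024. A quick way to confirm it took effect inside a running pod (the pod name here is a hypothetical placeholder):

    # expect "0" if the default_sysctls entry above was applied
    kubectl exec my-pod -- cat /proc/sys/net/ipv4/ip_unprivileged_port_start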
	I0916 10:33:49.003217   38254 command_runner.go:130] > # List of devices on the host that a
	I0916 10:33:49.003225   38254 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 10:33:49.003229   38254 command_runner.go:130] > # allowed_devices = [
	I0916 10:33:49.003236   38254 command_runner.go:130] > # 	"/dev/fuse",
	I0916 10:33:49.003239   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003244   38254 command_runner.go:130] > # List of additional devices, specified as
	I0916 10:33:49.003263   38254 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 10:33:49.003271   38254 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 10:33:49.003277   38254 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:33:49.003283   38254 command_runner.go:130] > # additional_devices = [
	I0916 10:33:49.003286   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003291   38254 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 10:33:49.003297   38254 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 10:33:49.003301   38254 command_runner.go:130] > # 	"/etc/cdi",
	I0916 10:33:49.003308   38254 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 10:33:49.003311   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003317   38254 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 10:33:49.003326   38254 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 10:33:49.003330   38254 command_runner.go:130] > # Defaults to false.
	I0916 10:33:49.003335   38254 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 10:33:49.003341   38254 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 10:33:49.003349   38254 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 10:33:49.003353   38254 command_runner.go:130] > # hooks_dir = [
	I0916 10:33:49.003359   38254 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 10:33:49.003362   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003368   38254 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 10:33:49.003376   38254 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 10:33:49.003382   38254 command_runner.go:130] > # its default mounts from the following two files:
	I0916 10:33:49.003387   38254 command_runner.go:130] > #
	I0916 10:33:49.003393   38254 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 10:33:49.003401   38254 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 10:33:49.003407   38254 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 10:33:49.003410   38254 command_runner.go:130] > #
	I0916 10:33:49.003416   38254 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 10:33:49.003424   38254 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 10:33:49.003430   38254 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 10:33:49.003437   38254 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 10:33:49.003441   38254 command_runner.go:130] > #
	I0916 10:33:49.003447   38254 command_runner.go:130] > # default_mounts_file = ""
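As an illustration of the /SRC:/DST format described above, a hand-written override file might look like this (the mounted path is a hypothetical example, not taken from this run):

    # one mount per line, format /SRC:/DST
    sudo tee /etc/containers/mounts.conf <<'EOF'
    /usr/share/zoneinfo:/usr/share/zoneinfo
    EOF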
	I0916 10:33:49.003453   38254 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 10:33:49.003461   38254 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 10:33:49.003465   38254 command_runner.go:130] > # pids_limit = 0
	I0916 10:33:49.003471   38254 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 10:33:49.003479   38254 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 10:33:49.003485   38254 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 10:33:49.003495   38254 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 10:33:49.003501   38254 command_runner.go:130] > # log_size_max = -1
	I0916 10:33:49.003508   38254 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 10:33:49.003514   38254 command_runner.go:130] > # log_to_journald = false
	I0916 10:33:49.003520   38254 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 10:33:49.003528   38254 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 10:33:49.003533   38254 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 10:33:49.003540   38254 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 10:33:49.003545   38254 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 10:33:49.003551   38254 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 10:33:49.003557   38254 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 10:33:49.003563   38254 command_runner.go:130] > # read_only = false
	I0916 10:33:49.003569   38254 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 10:33:49.003577   38254 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 10:33:49.003581   38254 command_runner.go:130] > # live configuration reload.
	I0916 10:33:49.003587   38254 command_runner.go:130] > # log_level = "info"
	I0916 10:33:49.003593   38254 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 10:33:49.003600   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.003604   38254 command_runner.go:130] > # log_filter = ""
	I0916 10:33:49.003610   38254 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 10:33:49.003616   38254 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 10:33:49.003619   38254 command_runner.go:130] > # separated by comma.
	I0916 10:33:49.003624   38254 command_runner.go:130] > # uid_mappings = ""
	I0916 10:33:49.003630   38254 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 10:33:49.003643   38254 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 10:33:49.003650   38254 command_runner.go:130] > # separated by comma.
	I0916 10:33:49.003655   38254 command_runner.go:130] > # gid_mappings = ""
	I0916 10:33:49.003663   38254 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 10:33:49.003669   38254 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:33:49.003674   38254 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:33:49.003681   38254 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 10:33:49.003686   38254 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 10:33:49.003695   38254 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:33:49.003701   38254 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:33:49.003707   38254 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 10:33:49.003713   38254 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 10:33:49.003719   38254 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 10:33:49.003725   38254 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0916 10:33:49.003730   38254 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 10:33:49.003737   38254 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 10:33:49.003746   38254 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 10:33:49.003751   38254 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 10:33:49.003758   38254 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 10:33:49.003762   38254 command_runner.go:130] > # drop_infra_ctr = true
	I0916 10:33:49.003770   38254 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 10:33:49.003775   38254 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 10:33:49.003786   38254 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 10:33:49.003793   38254 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 10:33:49.003799   38254 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 10:33:49.003804   38254 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 10:33:49.003810   38254 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 10:33:49.003818   38254 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 10:33:49.003824   38254 command_runner.go:130] > # pinns_path = ""
	I0916 10:33:49.003831   38254 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 10:33:49.003839   38254 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0916 10:33:49.003846   38254 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0916 10:33:49.003853   38254 command_runner.go:130] > # default_runtime = "runc"
	I0916 10:33:49.003858   38254 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 10:33:49.003867   38254 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0916 10:33:49.003882   38254 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 10:33:49.003889   38254 command_runner.go:130] > # creation as a file is not desired either.
	I0916 10:33:49.003898   38254 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 10:33:49.003905   38254 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 10:33:49.003910   38254 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 10:33:49.003913   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003919   38254 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 10:33:49.003928   38254 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 10:33:49.003935   38254 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0916 10:33:49.003941   38254 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0916 10:33:49.003945   38254 command_runner.go:130] > #
	I0916 10:33:49.003949   38254 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0916 10:33:49.003957   38254 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0916 10:33:49.003962   38254 command_runner.go:130] > #  runtime_type = "oci"
	I0916 10:33:49.003969   38254 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0916 10:33:49.003973   38254 command_runner.go:130] > #  privileged_without_host_devices = false
	I0916 10:33:49.003980   38254 command_runner.go:130] > #  allowed_annotations = []
	I0916 10:33:49.003983   38254 command_runner.go:130] > # Where:
	I0916 10:33:49.003988   38254 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0916 10:33:49.003997   38254 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0916 10:33:49.004003   38254 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 10:33:49.004012   38254 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 10:33:49.004016   38254 command_runner.go:130] > #   in $PATH.
	I0916 10:33:49.004022   38254 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0916 10:33:49.004029   38254 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 10:33:49.004034   38254 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0916 10:33:49.004040   38254 command_runner.go:130] > #   state.
	I0916 10:33:49.004046   38254 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 10:33:49.004054   38254 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 10:33:49.004061   38254 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 10:33:49.004068   38254 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 10:33:49.004075   38254 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 10:33:49.004084   38254 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 10:33:49.004089   38254 command_runner.go:130] > #   The currently recognized values are:
	I0916 10:33:49.004097   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 10:33:49.004105   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 10:33:49.004112   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 10:33:49.004118   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 10:33:49.004127   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 10:33:49.004134   38254 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 10:33:49.004142   38254 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 10:33:49.004149   38254 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0916 10:33:49.004156   38254 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 10:33:49.004160   38254 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 10:33:49.004165   38254 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0916 10:33:49.004170   38254 command_runner.go:130] > runtime_type = "oci"
	I0916 10:33:49.004175   38254 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 10:33:49.004181   38254 command_runner.go:130] > runtime_config_path = ""
	I0916 10:33:49.004185   38254 command_runner.go:130] > monitor_path = ""
	I0916 10:33:49.004189   38254 command_runner.go:130] > monitor_cgroup = ""
	I0916 10:33:49.004195   38254 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 10:33:49.004219   38254 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0916 10:33:49.004225   38254 command_runner.go:130] > # running containers
	I0916 10:33:49.004229   38254 command_runner.go:130] > #[crio.runtime.runtimes.crun]
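The commented-out crun handler above can be enabled with a drop-in that mirrors the runc table shown earlier. A minimal sketch, assuming crun is installed at /usr/bin/crun and that this CRI-O build reads drop-ins from /etc/crio/crio.conf.d:

    sudo tee /etc/crio/crio.conf.d/10-crun.conf <<'EOF'
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun"
    EOF
    sudo systemctl restart crio   # pick up the new runtime table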
	I0916 10:33:49.004238   38254 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0916 10:33:49.004244   38254 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0916 10:33:49.004252   38254 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0916 10:33:49.004257   38254 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0916 10:33:49.004262   38254 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0916 10:33:49.004269   38254 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0916 10:33:49.004273   38254 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0916 10:33:49.004281   38254 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0916 10:33:49.004285   38254 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0916 10:33:49.004293   38254 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 10:33:49.004298   38254 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 10:33:49.004306   38254 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 10:33:49.004313   38254 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 10:33:49.004322   38254 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 10:33:49.004328   38254 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 10:33:49.004339   38254 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 10:33:49.004349   38254 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 10:33:49.004354   38254 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I0916 10:33:49.004363   38254 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 10:33:49.004367   38254 command_runner.go:130] > # Example:
	I0916 10:33:49.004376   38254 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 10:33:49.004380   38254 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 10:33:49.004387   38254 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 10:33:49.004392   38254 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 10:33:49.004398   38254 command_runner.go:130] > # cpuset = "0-1"
	I0916 10:33:49.004402   38254 command_runner.go:130] > # cpushares = 0
	I0916 10:33:49.004408   38254 command_runner.go:130] > # Where:
	I0916 10:33:49.004412   38254 command_runner.go:130] > # The workload name is workload-type.
	I0916 10:33:49.004419   38254 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 10:33:49.004426   38254 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 10:33:49.004431   38254 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 10:33:49.004441   38254 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 10:33:49.004447   38254 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 10:33:49.004450   38254 command_runner.go:130] > # 
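Putting the workload example above together, a pod opts in with the activation annotation and may override a resource per container. A sketch following the annotation forms documented above (the pod and container names are hypothetical):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""                                  # activation: key only, value ignored
        io.crio.workload-type/demo: '{"cpushares": "512"}'    # per-container override
    spec:
      containers:
      - name: demo
        image: registry.k8s.io/pause:3.10
    EOF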
	I0916 10:33:49.004456   38254 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 10:33:49.004462   38254 command_runner.go:130] > #
	I0916 10:33:49.004468   38254 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 10:33:49.004476   38254 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 10:33:49.004482   38254 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 10:33:49.004491   38254 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 10:33:49.004497   38254 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 10:33:49.004503   38254 command_runner.go:130] > [crio.image]
	I0916 10:33:49.004508   38254 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 10:33:49.004515   38254 command_runner.go:130] > # default_transport = "docker://"
	I0916 10:33:49.004521   38254 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 10:33:49.004529   38254 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:33:49.004534   38254 command_runner.go:130] > # global_auth_file = ""
	I0916 10:33:49.004541   38254 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 10:33:49.004546   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.004553   38254 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 10:33:49.004560   38254 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 10:33:49.004568   38254 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:33:49.004573   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.004580   38254 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 10:33:49.004585   38254 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 10:33:49.004594   38254 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 10:33:49.004600   38254 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 10:33:49.004608   38254 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 10:33:49.004612   38254 command_runner.go:130] > # pause_command = "/pause"
	I0916 10:33:49.004618   38254 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 10:33:49.004626   38254 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 10:33:49.004632   38254 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 10:33:49.004638   38254 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 10:33:49.004645   38254 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 10:33:49.004649   38254 command_runner.go:130] > # signature_policy = ""
	I0916 10:33:49.004660   38254 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 10:33:49.004666   38254 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 10:33:49.004671   38254 command_runner.go:130] > # changing them here.
	I0916 10:33:49.004675   38254 command_runner.go:130] > # insecure_registries = [
	I0916 10:33:49.004681   38254 command_runner.go:130] > # ]
	I0916 10:33:49.004687   38254 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 10:33:49.004693   38254 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 10:33:49.004697   38254 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 10:33:49.004705   38254 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 10:33:49.004709   38254 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 10:33:49.004715   38254 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 10:33:49.004720   38254 command_runner.go:130] > # CNI plugins.
	I0916 10:33:49.004723   38254 command_runner.go:130] > [crio.network]
	I0916 10:33:49.004731   38254 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 10:33:49.004737   38254 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 10:33:49.004743   38254 command_runner.go:130] > # cni_default_network = ""
	I0916 10:33:49.004748   38254 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 10:33:49.004754   38254 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 10:33:49.004760   38254 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 10:33:49.004766   38254 command_runner.go:130] > # plugin_dirs = [
	I0916 10:33:49.004769   38254 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 10:33:49.004773   38254 command_runner.go:130] > # ]
	I0916 10:33:49.004778   38254 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 10:33:49.004784   38254 command_runner.go:130] > [crio.metrics]
	I0916 10:33:49.004789   38254 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 10:33:49.004796   38254 command_runner.go:130] > # enable_metrics = false
	I0916 10:33:49.004801   38254 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 10:33:49.004808   38254 command_runner.go:130] > # By default, all metrics are enabled.
	I0916 10:33:49.004814   38254 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 10:33:49.004820   38254 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 10:33:49.004826   38254 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 10:33:49.004832   38254 command_runner.go:130] > # metrics_collectors = [
	I0916 10:33:49.004835   38254 command_runner.go:130] > # 	"operations",
	I0916 10:33:49.004841   38254 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 10:33:49.004848   38254 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 10:33:49.004851   38254 command_runner.go:130] > # 	"operations_errors",
	I0916 10:33:49.004856   38254 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 10:33:49.004860   38254 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 10:33:49.004864   38254 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 10:33:49.004870   38254 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 10:33:49.004874   38254 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 10:33:49.004884   38254 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 10:33:49.004890   38254 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 10:33:49.004894   38254 command_runner.go:130] > # 	"containers_oom_total",
	I0916 10:33:49.004897   38254 command_runner.go:130] > # 	"containers_oom",
	I0916 10:33:49.004902   38254 command_runner.go:130] > # 	"processes_defunct",
	I0916 10:33:49.004908   38254 command_runner.go:130] > # 	"operations_total",
	I0916 10:33:49.004912   38254 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 10:33:49.004917   38254 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 10:33:49.004921   38254 command_runner.go:130] > # 	"operations_errors_total",
	I0916 10:33:49.004927   38254 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 10:33:49.004931   38254 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 10:33:49.004937   38254 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 10:33:49.004941   38254 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 10:33:49.004945   38254 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 10:33:49.004949   38254 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 10:33:49.004952   38254 command_runner.go:130] > # ]
	I0916 10:33:49.004957   38254 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 10:33:49.004967   38254 command_runner.go:130] > # metrics_port = 9090
	I0916 10:33:49.004974   38254 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 10:33:49.004978   38254 command_runner.go:130] > # metrics_socket = ""
	I0916 10:33:49.004986   38254 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 10:33:49.004995   38254 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 10:33:49.005001   38254 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 10:33:49.005008   38254 command_runner.go:130] > # certificate on any modification event.
	I0916 10:33:49.005012   38254 command_runner.go:130] > # metrics_cert = ""
	I0916 10:33:49.005019   38254 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 10:33:49.005024   38254 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 10:33:49.005031   38254 command_runner.go:130] > # metrics_key = ""
	I0916 10:33:49.005036   38254 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 10:33:49.005042   38254 command_runner.go:130] > [crio.tracing]
	I0916 10:33:49.005048   38254 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 10:33:49.005054   38254 command_runner.go:130] > # enable_tracing = false
	I0916 10:33:49.005060   38254 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 10:33:49.005066   38254 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 10:33:49.005072   38254 command_runner.go:130] > # Number of samples to collect per million spans.
	I0916 10:33:49.005078   38254 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 10:33:49.005084   38254 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 10:33:49.005090   38254 command_runner.go:130] > [crio.stats]
	I0916 10:33:49.005095   38254 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 10:33:49.005103   38254 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 10:33:49.005107   38254 command_runner.go:130] > # stats_collection_period = 0
	I0916 10:33:49.005165   38254 cni.go:84] Creating CNI manager for ""
	I0916 10:33:49.005174   38254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:33:49.005184   38254 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:33:49.005202   38254 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546931 NodeName:functional-546931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:33:49.005320   38254 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546931"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:33:49.005406   38254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:33:49.013742   38254 command_runner.go:130] > kubeadm
	I0916 10:33:49.013765   38254 command_runner.go:130] > kubectl
	I0916 10:33:49.013771   38254 command_runner.go:130] > kubelet
	I0916 10:33:49.013796   38254 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:33:49.013847   38254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:33:49.021757   38254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0916 10:33:49.038691   38254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:33:49.055067   38254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
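The rendered kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new. For a manual sanity check of such a file, kubeadm v1.26+ ships a validator; minikube does not invoke this here, so the following is only a sketch:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new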
	I0916 10:33:49.071178   38254 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:33:49.074413   38254 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0916 10:33:49.074489   38254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:33:49.176315   38254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:33:49.186887   38254 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931 for IP: 192.168.49.2
	I0916 10:33:49.186909   38254 certs.go:194] generating shared ca certs ...
	I0916 10:33:49.186926   38254 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:33:49.187066   38254 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:33:49.187105   38254 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:33:49.187111   38254 certs.go:256] generating profile certs ...
	I0916 10:33:49.187181   38254 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.key
	I0916 10:33:49.187236   38254 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.key.94db7109
	I0916 10:33:49.187275   38254 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.key
	I0916 10:33:49.187283   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:33:49.187294   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:33:49.187304   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:33:49.187316   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:33:49.187329   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:33:49.187342   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:33:49.187356   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:33:49.187368   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:33:49.187416   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:33:49.187443   38254 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:33:49.187452   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:33:49.187475   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:33:49.187496   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:33:49.187517   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:33:49.187556   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:33:49.187579   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.187589   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.187599   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.188132   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:33:49.210555   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:33:49.232164   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:33:49.253429   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:33:49.274719   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:33:49.295960   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:33:49.317488   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:33:49.338688   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:33:49.360466   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:33:49.382811   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:33:49.405854   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:33:49.427060   38254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:33:49.443019   38254 ssh_runner.go:195] Run: openssl version
	I0916 10:33:49.447634   38254 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:33:49.447868   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:33:49.456226   38254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.459381   38254 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.459405   38254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.459438   38254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.465459   38254 command_runner.go:130] > 3ec20f2e
	I0916 10:33:49.465663   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:33:49.473825   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:33:49.482264   38254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.485248   38254 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.485278   38254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.485320   38254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.491217   38254 command_runner.go:130] > b5213941
	I0916 10:33:49.491418   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:33:49.499104   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:33:49.507482   38254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.510649   38254 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.510706   38254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.510753   38254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.516916   38254 command_runner.go:130] > 51391683
	I0916 10:33:49.517148   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
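
The three openssl/ln pairs above follow the standard OpenSSL trust-store layout: the subject-name hash printed by "openssl x509 -hash -noout" (e.g. b5213941) names a "<hash>.0" symlink in /etc/ssl/certs, which is how TLS clients locate the CA. A minimal Go sketch of one iteration of that pattern, assuming the openssl binary is on PATH and write access to /etc/ssl/certs:

// Sketch of the hash-and-symlink pattern in the log above.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// Equivalent of: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // -f semantics: replace an existing link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}
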
	I0916 10:33:49.525079   38254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:33:49.528120   38254 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:33:49.528141   38254 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:33:49.528150   38254 command_runner.go:130] > Device: 801h/2049d	Inode: 845407      Links: 1
	I0916 10:33:49.528159   38254 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:33:49.528168   38254 command_runner.go:130] > Access: 2024-09-16 10:33:12.661786417 +0000
	I0916 10:33:49.528175   38254 command_runner.go:130] > Modify: 2024-09-16 10:33:12.661786417 +0000
	I0916 10:33:49.528185   38254 command_runner.go:130] > Change: 2024-09-16 10:33:12.661786417 +0000
	I0916 10:33:49.528197   38254 command_runner.go:130] >  Birth: 2024-09-16 10:33:12.661786417 +0000
	I0916 10:33:49.528251   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:33:49.534274   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.534327   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:33:49.540413   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.540482   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:33:49.546205   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.546462   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:33:49.552870   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.552926   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:33:49.559026   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.559247   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:33:49.565244   38254 command_runner.go:130] > Certificate will not expire
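
Each "Certificate will not expire" line above is an exit-status probe: "openssl x509 -checkend 86400" exits 0 when the certificate is still valid 86400 seconds (24 hours) from now, and 1 when it will have expired by then. A minimal Go sketch of the same check, with the cert path as an illustrative example:

// Sketch of the expiry probe repeated in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func expiresWithin24h(cert string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400")
	err := cmd.Run()
	if err == nil {
		return false, nil // exit 0: will not expire within the window
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, nil // exit 1: will expire within the window
	}
	return false, err // anything else is a real failure
}

func main() {
	soon, err := expiresWithin24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
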
	I0916 10:33:49.565437   38254 kubeadm.go:392] StartCluster: {Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
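
The StartCluster dump above is the cluster config struct printed verbatim by the log statement. A trimmed Go sketch of its shape, keeping only fields visible in the dump; the field names and values come from the line above, but the outer type name and the overall definition are illustrative, not minikube's exact source:

// Trimmed, illustrative model of the config printed by StartCluster above.
package main

import "fmt"

type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	APIServerPort    int
	KubernetesConfig KubernetesConfig
}

func main() {
	// Values taken directly from the dump above.
	cc := ClusterConfig{
		Name:          "functional-546931",
		Driver:        "docker",
		Memory:        4000,
		CPUs:          2,
		APIServerPort: 8441,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.31.1",
			ClusterName:       "functional-546931",
			ContainerRuntime:  "crio",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	fmt.Printf("%+v\n", cc)
}
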
	I0916 10:33:49.565522   38254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:33:49.565578   38254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:33:49.596726   38254 command_runner.go:130] > 046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b
	I0916 10:33:49.596751   38254 command_runner.go:130] > 3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0
	I0916 10:33:49.596760   38254 command_runner.go:130] > fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d
	I0916 10:33:49.596771   38254 command_runner.go:130] > af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02
	I0916 10:33:49.596780   38254 command_runner.go:130] > 162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02
	I0916 10:33:49.596789   38254 command_runner.go:130] > f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb
	I0916 10:33:49.596798   38254 command_runner.go:130] > 75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534
	I0916 10:33:49.596812   38254 command_runner.go:130] > 9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81
	I0916 10:33:49.598752   38254 cri.go:89] found id: "046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b"
	I0916 10:33:49.598773   38254 cri.go:89] found id: "3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0"
	I0916 10:33:49.598779   38254 cri.go:89] found id: "fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d"
	I0916 10:33:49.598784   38254 cri.go:89] found id: "af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02"
	I0916 10:33:49.598787   38254 cri.go:89] found id: "162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02"
	I0916 10:33:49.598791   38254 cri.go:89] found id: "f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb"
	I0916 10:33:49.598793   38254 cri.go:89] found id: "75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534"
	I0916 10:33:49.598796   38254 cri.go:89] found id: "9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81"
	I0916 10:33:49.598803   38254 cri.go:89] found id: ""
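
The "found id" lines above are parsed from the crictl output a few lines earlier: one 64-hex container ID per line, terminated by an empty entry. A minimal Go sketch of that listing step, re-running the exact command shown in the log and splitting its output into IDs (assumes crictl on PATH and sudo access to the CRI-O socket, as in the log):

// Sketch of the kube-system container listing traced above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// From the log: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
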
	I0916 10:33:49.598853   38254 ssh_runner.go:195] Run: sudo runc list -f json
	I0916 10:33:49.617772   38254 command_runner.go:130] > [{"ociVersion":"1.0.2-dev","id":"046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b/userdata","rootfs":"/var/lib/containers/storage/overlay/910a0c2bc01315fa3a464fded4f710b1057d34c7d7b2857e18a8de16957c048f/merged","created":"2024-09-16T10:33:38.617357367Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernete
s.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:38.592037792Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3","i
o.kubernetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-wjzzx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-wjzzx_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/910a0c2bc01315fa3a464fded4f710b1057d34c7d7b2857e18a8de16957c048f/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a8423288f91be1a84a4da521d6ae34bd864cd162a94fbed9d42a73771704123e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a8423288f91be1a84a4da521d6ae34bd
864cd162a94fbed9d42a73771704123e","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/containers/coredns/bf4a0824\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccou
nt\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~projected/kube-api-access-6nbq8\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-wjzzx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","kubernetes.io/config.seen":"2024-09-16T10:33:38.232398573Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02/userdata","rootfs":"/var/lib/containers/storage/overlay/be9a1f372203e2b026d3db2eea6468eaad749813495a4be6dfe5a66b16b6ed84/merged","created":"2024-09-16T10:33:16.913425992Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79"
,"io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.872185655Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f
3135b30aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c02f70efafdd9ad1683640c8d3761d1d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-546931_c02f70efafdd9ad1683640c8d3761d1d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/be9a1f372203e2b026d3db2eea6468eaad749813495a4be6dfe5a66b16b6ed84/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/878410a4a3694fdf2132194e1285396dab571b39a68ea3dbdc0049350911800d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"878410a4a3
694fdf2132194e1285396dab571b39a68ea3dbdc0049350911800d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/containers/kube-controller-manager/40c5a971\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propag
ation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-546931","io.kubernetes.pod.namespace":"kube-sy
stem","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.hash":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.seen":"2024-09-16T10:33:16.360793733Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0/userdata","rootfs":"/var/lib/containers/storage/overlay/169734699d8a29a2148c6c48e972446d8f5032095b5bbb73973aadc1d219e93f/merged","created":"2024-09-16T10:33:38.599775677Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.
kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:38.574503795Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7e94614-5
67e-47ba-a51a-426f09198dba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a7e94614-567e-47ba-a51a-426f09198dba/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/169734699d8a29a2148c6c48e972446d8f5032095b5bbb73973aadc1d219e93f/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TT
Y":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/containers/storage-provisioner/a6e61f0b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/volumes/kubernetes.io~projected/kube-api-access-2sn2d\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7e94614-567e-47ba-a51a
-426f09198dba","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-09-16T10:33:38.233440095Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/75
f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534/userdata","rootfs":"/var/lib/containers/storage/overlay/dac67d85252bf13f96a4320e1745721f70226b779a99caee53b0d5c2058e61f0/merged","created":"2024-09-16T10:33:16.898797659Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534","io.kubernetes.cri-o.ContainerType":"container","io.kubern
etes.cri-o.Created":"2024-09-16T10:33:16.857413084Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"adb8a765a0d6f587897c42f69e87ac66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546931_adb8a765a0d6f587897c42f69e87ac66/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dac67d85252bf13f96a4320e1745721f70226b779a99caee53b0d5c2058e61f0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-546931_kube-system_ad
b8a765a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546931_kube-system_adb8a765a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/containers/kube-scheduler/744b9614\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"
/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:16.360795477Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81/userdata","rootfs":"/var/lib/containers/storage/overlay/3f2d5b81adda588bd3e05ccee93b9df3daf72aec973afcb7e5fae676c4a7ffff/merged","created":"2024-09-16T10:33:16.900739801Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.ha
sh":"7df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.85647171Z","io.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aae
a29d1aee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eb02afa85fe4b42d87b2f90fa03a9ee4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-546931_eb02afa85fe4b42d87b2f90fa03a9ee4/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3f2d5b81adda588bd3e05ccee93b9df3daf72aec973afcb7e5fae676c4a7ffff/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a","io.kubernet
es.cri-o.SandboxName":"k8s_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/containers/kube-apiserver/66d438ec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/et
c/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.seen":"2024-09-16T10:33:16.360791837Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02","pid":0,"status
":"stopped","bundle":"/run/containers/storage/overlay-containers/af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02/userdata","rootfs":"/var/lib/containers/storage/overlay/44dc8cdc891e682f4096ed10197d68a070a7151c57c8d6675a213e2401d90332/merged","created":"2024-09-16T10:33:27.512878128Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e80daca3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e80daca3\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368
ed02","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:27.418309123Z","io.kubernetes.cri-o.Image":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri-o.ImageRef":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-6dtx8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"44bb424a-c279-467b-9256-64be125798f9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-6dtx8_44bb424a-c279-467b-9256-64be125798f9/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/44dc8cdc891e682f4096ed10197d68a070a7151c57c8d6675a213e2401d90332/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-6dtx8_kube
-system_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-6dtx8_kube-system_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/etc-hosts\",\"readonly\":false,\"propag
ation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/containers/kindnet-cni/72735cde\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/volumes/kubernetes.io~projected/kube-api-access-pvmbd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-6dtx8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"44bb424a-c279-467b-9256-64be125798f9","kubernetes.io/config.seen":"2024-09-16T10:33:27.017005789Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f2b587ead9ac67a13360a9d4e
64d8162b8e8a689647afbe35780436d360a37eb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb/userdata","rootfs":"/var/lib/containers/storage/overlay/20cb6bba16fec712839eac07b5ce765faf2741ea000908ea8ac56a835d2fff6d/merged","created":"2024-09-16T10:33:16.907949976Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f2b587ead9a
c67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.862227247Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4f74e884ad630d68b59e0dbdb6055584\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546931_4f74e884ad630d68b59e0dbdb6055584/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/20cb6bba16fec712839eac07b5ce765faf2741ea000908ea8ac56a835d2fff6d/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etc
d-functional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/containers/etcd/233a07f1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"conta
iner_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4f74e884ad630d68b59e0dbdb6055584","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.seen":"2024-09-16T10:33:16.360785708Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d/userdata","ro
otfs":"/var/lib/containers/storage/overlay/8005b4d90fbc1deaa0ddf38b3f6a0bc43e976e1a4a9f8fc787d1125d0d07fb03/merged","created":"2024-09-16T10:33:27.53460221Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:27.498124321Z","io.kubernetes.cri-o.Image":
"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-kshs9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-kshs9_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8005b4d90fbc1deaa0ddf38b3f6a0bc43e976e1a4a9f8fc787d1125d0d07fb03/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f14f9778290afbd7383f2dd12e
e1f50b74d62f40bf11ae42d2fd8c4a441931e1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f14f9778290afbd7383f2dd12ee1f50b74d62f40bf11ae42d2fd8c4a441931e1","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e0
19b2687b/containers/kube-proxy/1af07bf5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~projected/kube-api-access-j6b95\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-kshs9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b","kubernetes.io/config.seen":"2024-09-16T10:33:27.024180818Z","kubernetes.io/config.source":"api"},"owner":"root"}]
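
The block above is one log line carrying the full "runc list -f json" array; each entry has id, pid, status, bundle, rootfs, created, annotations, and owner keys, and the annotations map holds the Kubernetes pod and container metadata the log then inspects. A minimal Go sketch of decoding that output, with struct fields mirroring the JSON keys visible in the dump (assumes sudo access to runc, as in the log):

// Sketch of consuming the runc list JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type runcContainer struct {
	ID          string            `json:"id"`
	Pid         int               `json:"pid"`
	Status      string            `json:"status"`
	Bundle      string            `json:"bundle"`
	Rootfs      string            `json:"rootfs"`
	Created     string            `json:"created"`
	Annotations map[string]string `json:"annotations"`
	Owner       string            `json:"owner"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		id := c.ID
		if len(id) > 12 {
			id = id[:12] // short ID, as container tools usually print it
		}
		fmt.Printf("%s  %s  %s\n", id, c.Status, c.Annotations["io.kubernetes.container.name"])
	}
}
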
	I0916 10:33:49.617849   38254 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b/userdata","rootfs":"/var/lib/containers/storage/overlay/910a0c2bc01315fa3a464fded4f710b1057d34c7d7b2857e18a8de16957c048f/merged","created":"2024-09-16T10:33:38.617357367Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-
o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:38.592037792Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3","io.kube
rnetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-wjzzx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-wjzzx_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/910a0c2bc01315fa3a464fded4f710b1057d34c7d7b2857e18a8de16957c048f/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a8423288f91be1a84a4da521d6ae34bd864cd162a94fbed9d42a73771704123e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a8423288f91be1a84a4da521d6ae34bd864cd1
62a94fbed9d42a73771704123e","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/containers/coredns/bf4a0824\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\
"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~projected/kube-api-access-6nbq8\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-wjzzx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","kubernetes.io/config.seen":"2024-09-16T10:33:38.232398573Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02/userdata","rootfs":"/var/lib/containers/storage/overlay/be9a1f372203e2b026d3db2eea6468eaad749813495a4be6dfe5a66b16b6ed84/merged","created":"2024-09-16T10:33:16.913425992Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79","io.k
ubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.872185655Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b3
0aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c02f70efafdd9ad1683640c8d3761d1d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-546931_c02f70efafdd9ad1683640c8d3761d1d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/be9a1f372203e2b026d3db2eea6468eaad749813495a4be6dfe5a66b16b6ed84/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/878410a4a3694fdf2132194e1285396dab571b39a68ea3dbdc0049350911800d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"878410a4a3694fdf
2132194e1285396dab571b39a68ea3dbdc0049350911800d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/containers/kube-controller-manager/40c5a971\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\
":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-546931","io.kubernetes.pod.namespace":"kube-system",
"io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.hash":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.seen":"2024-09-16T10:33:16.360793733Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0/userdata","rootfs":"/var/lib/containers/storage/overlay/169734699d8a29a2148c6c48e972446d8f5032095b5bbb73973aadc1d219e93f/merged","created":"2024-09-16T10:33:38.599775677Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubern
etes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:38.574503795Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7e94614-567e-47
ba-a51a-426f09198dba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a7e94614-567e-47ba-a51a-426f09198dba/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/169734699d8a29a2148c6c48e972446d8f5032095b5bbb73973aadc1d219e93f/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"fa
lse","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/containers/storage-provisioner/a6e61f0b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/volumes/kubernetes.io~projected/kube-api-access-2sn2d\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7e94614-567e-47ba-a51a-426f0
9198dba","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-09-16T10:33:38.233440095Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/75f3c106
06812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534/userdata","rootfs":"/var/lib/containers/storage/overlay/dac67d85252bf13f96a4320e1745721f70226b779a99caee53b0d5c2058e61f0/merged","created":"2024-09-16T10:33:16.898797659Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.c
ri-o.Created":"2024-09-16T10:33:16.857413084Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"adb8a765a0d6f587897c42f69e87ac66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546931_adb8a765a0d6f587897c42f69e87ac66/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dac67d85252bf13f96a4320e1745721f70226b779a99caee53b0d5c2058e61f0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-546931_kube-system_adb8a765
a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546931_kube-system_adb8a765a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/containers/kube-scheduler/744b9614\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/k
ubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:16.360795477Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81/userdata","rootfs":"/var/lib/containers/storage/overlay/3f2d5b81adda588bd3e05ccee93b9df3daf72aec973afcb7e5fae676c4a7ffff/merged","created":"2024-09-16T10:33:16.900739801Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7
df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.85647171Z","io.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1a
ee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eb02afa85fe4b42d87b2f90fa03a9ee4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-546931_eb02afa85fe4b42d87b2f90fa03a9ee4/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3f2d5b81adda588bd3e05ccee93b9df3daf72aec973afcb7e5fae676c4a7ffff/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a","io.kubernetes.cri
-o.SandboxName":"k8s_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/containers/kube-apiserver/66d438ec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/
certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.seen":"2024-09-16T10:33:16.360791837Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02","pid":0,"status":"sto
pped","bundle":"/run/containers/storage/overlay-containers/af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02/userdata","rootfs":"/var/lib/containers/storage/overlay/44dc8cdc891e682f4096ed10197d68a070a7151c57c8d6675a213e2401d90332/merged","created":"2024-09-16T10:33:27.512878128Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e80daca3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e80daca3\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02",
"io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:27.418309123Z","io.kubernetes.cri-o.Image":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri-o.ImageRef":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-6dtx8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"44bb424a-c279-467b-9256-64be125798f9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-6dtx8_44bb424a-c279-467b-9256-64be125798f9/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/44dc8cdc891e682f4096ed10197d68a070a7151c57c8d6675a213e2401d90332/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-6dtx8_kube-syste
m_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-6dtx8_kube-system_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/etc-hosts\",\"readonly\":false,\"propagation\
":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/containers/kindnet-cni/72735cde\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/volumes/kubernetes.io~projected/kube-api-access-pvmbd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-6dtx8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"44bb424a-c279-467b-9256-64be125798f9","kubernetes.io/config.seen":"2024-09-16T10:33:27.017005789Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f2b587ead9ac67a13360a9d4e64d816
2b8e8a689647afbe35780436d360a37eb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb/userdata","rootfs":"/var/lib/containers/storage/overlay/20cb6bba16fec712839eac07b5ce765faf2741ea000908ea8ac56a835d2fff6d/merged","created":"2024-09-16T10:33:16.907949976Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f2b587ead9ac67a13
360a9d4e64d8162b8e8a689647afbe35780436d360a37eb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.862227247Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4f74e884ad630d68b59e0dbdb6055584\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546931_4f74e884ad630d68b59e0dbdb6055584/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/20cb6bba16fec712839eac07b5ce765faf2741ea000908ea8ac56a835d2fff6d/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-func
tional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/containers/etcd/233a07f1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_p
ath\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4f74e884ad630d68b59e0dbdb6055584","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.seen":"2024-09-16T10:33:16.360785708Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d/userdata","rootfs":
"/var/lib/containers/storage/overlay/8005b4d90fbc1deaa0ddf38b3f6a0bc43e976e1a4a9f8fc787d1125d0d07fb03/merged","created":"2024-09-16T10:33:27.53460221Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:27.498124321Z","io.kubernetes.cri-o.Image":"60c00
5f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-kshs9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-kshs9_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8005b4d90fbc1deaa0ddf38b3f6a0bc43e976e1a4a9f8fc787d1125d0d07fb03/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f14f9778290afbd7383f2dd12ee1f50b
74d62f40bf11ae42d2fd8c4a441931e1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f14f9778290afbd7383f2dd12ee1f50b74d62f40bf11ae42d2fd8c4a441931e1","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b268
7b/containers/kube-proxy/1af07bf5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~projected/kube-api-access-j6b95\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-kshs9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b","kubernetes.io/config.seen":"2024-09-16T10:33:27.024180818Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0916 10:33:49.618206   38254 cri.go:126] list returned 8 containers
	I0916 10:33:49.618216   38254 cri.go:129] container: {ID:046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b Status:stopped}
	I0916 10:33:49.618229   38254 cri.go:135] skipping {046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618239   38254 cri.go:129] container: {ID:162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02 Status:stopped}
	I0916 10:33:49.618244   38254 cri.go:135] skipping {162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618248   38254 cri.go:129] container: {ID:3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0 Status:stopped}
	I0916 10:33:49.618253   38254 cri.go:135] skipping {3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618256   38254 cri.go:129] container: {ID:75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534 Status:stopped}
	I0916 10:33:49.618260   38254 cri.go:135] skipping {75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618265   38254 cri.go:129] container: {ID:9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81 Status:stopped}
	I0916 10:33:49.618269   38254 cri.go:135] skipping {9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618272   38254 cri.go:129] container: {ID:af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02 Status:stopped}
	I0916 10:33:49.618275   38254 cri.go:135] skipping {af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618281   38254 cri.go:129] container: {ID:f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb Status:stopped}
	I0916 10:33:49.618284   38254 cri.go:135] skipping {f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618290   38254 cri.go:129] container: {ID:fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d Status:stopped}
	I0916 10:33:49.618293   38254 cri.go:135] skipping {fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d stopped}: state = "stopped", want "paused"
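	The cri.go lines above show minikube's pause filter at work: it lists every container CRI-O reports, keeps only those already in the wanted state ("paused"), and logs a skip for each of the rest. A minimal standalone sketch of that filter, with illustrative types rather than minikube's real ones:

	package main

	import "fmt"

	// container mirrors the {ID Status} pairs printed by cri.go above.
	type container struct {
		ID     string
		Status string
	}

	// filterByState keeps only containers already in the wanted state,
	// logging a skip line for everything else, as cri.go:129/135 do.
	func filterByState(all []container, want string) []container {
		var keep []container
		for _, c := range all {
			if c.Status != want {
				fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
				continue
			}
			keep = append(keep, c)
		}
		return keep
	}

	func main() {
		all := []container{
			{ID: "046d8feb…", Status: "stopped"},
			{ID: "f2b587ea…", Status: "stopped"},
		}
		fmt.Printf("list returned %d containers\n", len(all))
		fmt.Printf("%d matched\n", len(filterByState(all, "paused")))
	}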
	I0916 10:33:49.618334   38254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:33:49.625627   38254 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0916 10:33:49.625651   38254 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0916 10:33:49.625657   38254 command_runner.go:130] > /var/lib/minikube/etcd:
	I0916 10:33:49.625660   38254 command_runner.go:130] > member
	I0916 10:33:49.626315   38254 kubeadm.go:408] found existing configuration files, will attempt cluster restart
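	kubeadm.go:408 chooses between a fresh init and a cluster restart by checking whether the kubelet and etcd state files listed just above still exist on the node. A hedged sketch of that existence check; minikube runs the same `ls` over SSH, while this version shells out locally, and the helper name is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// configFiles are the paths whose presence signals a restartable cluster,
	// matching the `sudo ls` in the log above.
	var configFiles = []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}

	// hasExistingCluster returns true when the state files are present;
	// any missing path makes ls exit non-zero.
	func hasExistingCluster() bool {
		out, err := exec.Command("sudo", append([]string{"ls"}, configFiles...)...).CombinedOutput()
		if err != nil {
			return false
		}
		return strings.Contains(string(out), "kubeadm-flags.env")
	}

	func main() {
		if hasExistingCluster() {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		} else {
			fmt.Println("no existing configuration, running kubeadm init")
		}
	}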
	I0916 10:33:49.626332   38254 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:33:49.626386   38254 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:33:49.633963   38254 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:33:49.634429   38254 kubeconfig.go:125] found "functional-546931" server: "https://192.168.49.2:8441"
	I0916 10:33:49.634791   38254 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:33:49.634995   38254 kapi.go:59] client config for functional-546931: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:33:49.635364   38254 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:33:49.635523   38254 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:33:49.643129   38254 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0916 10:33:49.643158   38254 kubeadm.go:597] duration metric: took 16.81941ms to restartPrimaryControlPlane
	I0916 10:33:49.643169   38254 kubeadm.go:394] duration metric: took 77.739557ms to StartCluster
	I0916 10:33:49.643190   38254 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:33:49.643256   38254 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:33:49.643780   38254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:33:49.643985   38254 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:33:49.644050   38254 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
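	The addons.go:507 line above carries the full enable/disable map for this profile; after a functional restart only default-storageclass and storage-provisioner are true. A small sketch of how such a toggle map can drive per-addon setup (profile name taken from the log, loop body illustrative):

	package main

	import "fmt"

	func main() {
		// toEnable mirrors a slice of the map printed by addons.go:507;
		// everything defaults to false and the profile flips on what it needs.
		toEnable := map[string]bool{
			"default-storageclass": true,
			"storage-provisioner":  true,
			"ingress":              false,
			"metrics-server":       false,
		}
		for name, on := range toEnable {
			if !on {
				continue
			}
			fmt.Printf("Setting addon %s=true in profile %q\n", name, "functional-546931")
		}
	}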
	I0916 10:33:49.644203   38254 addons.go:69] Setting storage-provisioner=true in profile "functional-546931"
	I0916 10:33:49.644226   38254 addons.go:234] Setting addon storage-provisioner=true in "functional-546931"
	W0916 10:33:49.644235   38254 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:33:49.644181   38254 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:33:49.644268   38254 host.go:66] Checking if "functional-546931" exists ...
	I0916 10:33:49.644278   38254 addons.go:69] Setting default-storageclass=true in profile "functional-546931"
	I0916 10:33:49.644298   38254 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-546931"
	I0916 10:33:49.644589   38254 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:33:49.644653   38254 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:33:49.646651   38254 out.go:177] * Verifying Kubernetes components...
	I0916 10:33:49.648003   38254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:33:49.663793   38254 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:33:49.664132   38254 kapi.go:59] client config for functional-546931: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:33:49.664453   38254 addons.go:234] Setting addon default-storageclass=true in "functional-546931"
	W0916 10:33:49.664470   38254 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:33:49.664493   38254 host.go:66] Checking if "functional-546931" exists ...
	I0916 10:33:49.664783   38254 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:33:49.664937   38254 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:33:49.666385   38254 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:33:49.666402   38254 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:33:49.666441   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:49.682108   38254 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:33:49.682134   38254 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:33:49.682192   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:49.692860   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:49.705787   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:49.762902   38254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:33:49.773430   38254 node_ready.go:35] waiting up to 6m0s for node "functional-546931" to be "Ready" ...
	I0916 10:33:49.773561   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:49.773571   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:49.773582   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:49.773588   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:49.773815   38254 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:33:49.773834   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:49.802716   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:33:49.814043   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:33:49.857384   38254 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:33:49.860509   38254 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:49.860540   38254 retry.go:31] will retry after 300.245829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:49.869914   38254 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:33:49.872729   38254 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:49.872762   38254 retry.go:31] will retry after 238.748719ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:50.112285   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:33:50.161885   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:33:50.171454   38254 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:33:50.177236   38254 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:50.177268   38254 retry.go:31] will retry after 529.480717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:50.274595   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:50.274626   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:50.274638   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:50.274644   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:50.274973   38254 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:33:50.274992   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:50.315059   38254 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:33:50.317990   38254 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:50.318021   38254 retry.go:31] will retry after 305.983384ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
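	Every apply above fails the same way: the apiserver behind localhost:8441 is still restarting, so kubectl's openapi download gets connection refused, and retry.go re-runs the apply after a short randomized delay until the endpoint answers. A minimal sketch of that retry loop; the delay bounds and attempt count are illustrative, not minikube's:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// applyWithRetry re-runs apply until it succeeds or attempts run out,
	// sleeping a short randomized backoff between tries, like retry.go:31.
	func applyWithRetry(apply func() error, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := applyWithRetry(func() error {
			calls++
			if calls < 3 { // apiserver not ready yet
				return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
			}
			return nil
		}, 10)
		fmt.Println("apply result:", err)
	}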
	I0916 10:33:50.624430   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:33:50.707033   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:33:50.774228   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:50.774255   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:50.774263   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:50.774269   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:50.774569   38254 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:33:50.774585   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:51.274368   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:51.274392   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:51.274399   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:51.274405   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.038248   38254 round_trippers.go:574] Response Status: 200 OK in 1763 milliseconds
	I0916 10:33:53.038275   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.038284   38254 round_trippers.go:580]     Audit-Id: 1c642505-dccc-43a1-8ea3-320a97466b10
	I0916 10:33:53.038289   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.038294   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.038297   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:33:53.038301   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:33:53.038306   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.038412   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.039318   38254 node_ready.go:49] node "functional-546931" has status "Ready":"True"
	I0916 10:33:53.039341   38254 node_ready.go:38] duration metric: took 3.265875226s for node "functional-546931" to be "Ready" ...
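	node_ready.go polls GET /api/v1/nodes/<name> until the node's Ready condition turns True; in the trace above the first two polls get no response at all while the apiserver comes back, and the third succeeds after ~3.3s. A hedged client-go sketch of the same check, with the kubeconfig path and poll interval as placeholders:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-546931", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println(`node has status "Ready":"True"`)
				return
			}
			time.Sleep(500 * time.Millisecond) // poll interval, illustrative
		}
		fmt.Println("timed out waiting for node")
	}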
	I0916 10:33:53.039354   38254 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:33:53.039406   38254 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:33:53.039420   38254 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:33:53.039489   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:33:53.039497   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.039507   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.039513   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.100254   38254 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0916 10:33:53.100282   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.100293   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:33:53.100299   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:33:53.100305   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.100309   38254 round_trippers.go:580]     Audit-Id: ae4b4bee-0fe7-4f86-9096-659df06d797e
	I0916 10:33:53.100316   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.100321   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.101912   38254 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-wjzzx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","resourceVersion":"437","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"e5f0af21-e8d5-4d2c-a475-5941bddff6bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5f0af21-e8d5-4d2c-a475-5941bddff6bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59464 chars]
	I0916 10:33:53.107344   38254 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.107473   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-wjzzx
	I0916 10:33:53.107486   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.107498   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.107507   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.197526   38254 round_trippers.go:574] Response Status: 200 OK in 89 milliseconds
	I0916 10:33:53.197557   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.197567   38254 round_trippers.go:580]     Audit-Id: 651694ce-a38c-452e-9b01-11e9d57c8932
	I0916 10:33:53.197573   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.197577   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.197581   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:33:53.197587   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:33:53.197592   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.197753   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-wjzzx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","resourceVersion":"437","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"e5f0af21-e8d5-4d2c-a475-5941bddff6bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5f0af21-e8d5-4d2c-a475-5941bddff6bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6814 chars]
	I0916 10:33:53.198405   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.198429   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.198440   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.198449   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.203490   38254 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:33:53.203517   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.203527   38254 round_trippers.go:580]     Audit-Id: 7b802d09-8564-4668-abc2-0b4162246b03
	I0916 10:33:53.203535   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.203542   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.203545   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.203549   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.203568   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.203703   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.204288   38254 pod_ready.go:93] pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.204341   38254 pod_ready.go:82] duration metric: took 96.956266ms for pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.204382   38254 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.204515   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-546931
	I0916 10:33:53.204546   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.204584   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.204596   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.208347   38254 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:33:53.208421   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.208436   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.208440   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.208445   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.208450   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.208453   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.208458   38254 round_trippers.go:580]     Audit-Id: 3d330fd4-a8c9-4e1d-af79-01d37292c22a
	I0916 10:33:53.208704   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-546931","namespace":"kube-system","uid":"7fe96e5a-6112-4e96-981b-b15be906fa34","resourceVersion":"408","creationTimestamp":"2024-09-16T10:33:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.mirror":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.seen":"2024-09-16T10:33:16.360785708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6440 chars]
	I0916 10:33:53.209277   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.209312   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.209326   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.209348   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.214187   38254 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:33:53.214217   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.214225   38254 round_trippers.go:580]     Audit-Id: ace7b2e4-42fa-44aa-a282-895c07bcbc84
	I0916 10:33:53.214231   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.214235   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.214239   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.214245   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.214250   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.214765   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.215209   38254 pod_ready.go:93] pod "etcd-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.215239   38254 pod_ready.go:82] duration metric: took 10.839142ms for pod "etcd-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.215259   38254 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.215351   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-546931
	I0916 10:33:53.215365   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.215378   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.215392   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.294408   38254 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I0916 10:33:53.294434   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.294443   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.294450   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.294453   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.294457   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.294461   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.294466   38254 round_trippers.go:580]     Audit-Id: 03574138-3282-4aa3-aa83-814665603454
	I0916 10:33:53.294680   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-546931","namespace":"kube-system","uid":"19d3920d-b342-4764-b722-116797db07ca","resourceVersion":"414","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.mirror":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.seen":"2024-09-16T10:33:22.023551772Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8516 chars]
	I0916 10:33:53.295290   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.295318   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.295328   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.295334   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.303774   38254 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 10:33:53.303799   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.303850   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.303856   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.303862   38254 round_trippers.go:580]     Audit-Id: 96ad4e16-5211-4b10-90c9-83d766b93e24
	I0916 10:33:53.303866   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.303870   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.303874   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.304059   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.304531   38254 pod_ready.go:93] pod "kube-apiserver-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.304564   38254 pod_ready.go:82] duration metric: took 89.294956ms for pod "kube-apiserver-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.304601   38254 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.304715   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-546931
	I0916 10:33:53.304736   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.304762   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.304776   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.307514   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.307538   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.307548   38254 round_trippers.go:580]     Audit-Id: cda7b98e-3acf-4b93-aa0a-6aa95829a2e4
	I0916 10:33:53.307554   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.307561   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.307565   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.307571   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.307575   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.307703   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-546931","namespace":"kube-system","uid":"49789d64-6fd1-441c-b9e0-470a0832d127","resourceVersion":"416","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.mirror":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.seen":"2024-09-16T10:33:22.023553611Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8091 chars]
	I0916 10:33:53.308331   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.308349   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.308360   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.308366   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.311139   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.311160   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.311172   38254 round_trippers.go:580]     Audit-Id: d752729e-0858-4d4e-9528-9d2d7e158372
	I0916 10:33:53.311178   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.311184   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.311189   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.311192   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.311197   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.311355   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.311763   38254 pod_ready.go:93] pod "kube-controller-manager-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.311785   38254 pod_ready.go:82] duration metric: took 7.161521ms for pod "kube-controller-manager-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.311796   38254 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kshs9" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.311855   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kshs9
	I0916 10:33:53.311859   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.311866   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.311919   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.314064   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.314083   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.314092   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.314096   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.314102   38254 round_trippers.go:580]     Audit-Id: 9ef53038-6005-430b-a333-55401be5c3b3
	I0916 10:33:53.314105   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.314110   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.314113   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.314244   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kshs9","generateName":"kube-proxy-","namespace":"kube-system","uid":"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b","resourceVersion":"402","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86c1ab56-d49f-4f2c-8253-0494b746de56","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86c1ab56-d49f-4f2c-8253-0494b746de56\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6172 chars]
	I0916 10:33:53.314768   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.314787   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.314798   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.314807   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.318553   38254 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:33:53.318600   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.318622   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.318632   38254 round_trippers.go:580]     Audit-Id: 304083ef-0660-4c80-b607-7b0d2afbeabc
	I0916 10:33:53.318637   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.318641   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.318645   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.318650   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.319108   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.319589   38254 pod_ready.go:93] pod "kube-proxy-kshs9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.319619   38254 pod_ready.go:82] duration metric: took 7.815518ms for pod "kube-proxy-kshs9" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.319632   38254 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.439973   38254 request.go:632] Waited for 120.260508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:53.440058   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:53.440099   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.440120   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.440136   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.442326   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.442344   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.442353   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.442358   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.442363   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.442367   38254 round_trippers.go:580]     Audit-Id: ed8cfd9f-3256-4ec1-a34f-874048e03f2d
	I0916 10:33:53.442371   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.442375   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.442513   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:53.640393   38254 request.go:632] Waited for 197.36129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.640448   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.640453   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.640459   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.640463   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.642389   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:53.642414   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.642424   38254 round_trippers.go:580]     Audit-Id: 0757a158-ca2f-47f0-ba87-6a82d0a5c7e6
	I0916 10:33:53.642429   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.642433   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.642438   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.642442   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.642448   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.642584   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.840548   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:53.840577   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.840599   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.840608   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.843027   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.843056   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.843067   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.843073   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.843077   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.843082   38254 round_trippers.go:580]     Audit-Id: 27427865-937d-4133-830d-1adc18e56eda
	I0916 10:33:53.843087   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.843090   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.843732   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:53.955357   38254 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0916 10:33:53.955390   38254 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0916 10:33:53.955402   38254 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0916 10:33:53.955415   38254 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0916 10:33:53.955423   38254 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0916 10:33:53.955435   38254 command_runner.go:130] > pod/storage-provisioner configured
	I0916 10:33:53.955464   38254 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.331007744s)
	I0916 10:33:53.955500   38254 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0916 10:33:53.955548   38254 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.248481765s)
	I0916 10:33:53.955697   38254 round_trippers.go:463] GET https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses
	I0916 10:33:53.955710   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.955720   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.955726   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.958346   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.958370   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.958378   38254 round_trippers.go:580]     Audit-Id: c6bea1e7-9038-41e9-be20-1f68f1bcf84c
	I0916 10:33:53.958381   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.958385   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.958388   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.958391   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.958395   38254 round_trippers.go:580]     Content-Length: 1273
	I0916 10:33:53.958397   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.958427   38254 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"463"},"items":[{"metadata":{"name":"standard","uid":"7dc87164-1259-473b-bcbc-5a709a2c0af0","resourceVersion":"377","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:33:53.958853   38254 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"7dc87164-1259-473b-bcbc-5a709a2c0af0","resourceVersion":"377","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:33:53.958910   38254 round_trippers.go:463] PUT https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:33:53.958917   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.958924   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.958929   38254 round_trippers.go:473]     Content-Type: application/json
	I0916 10:33:53.958935   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.962008   38254 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:33:53.962029   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.962036   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.962041   38254 round_trippers.go:580]     Audit-Id: 85cec3d1-2992-4d80-ae3a-330f67f88a6b
	I0916 10:33:53.962044   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.962054   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.962058   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.962060   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.962064   38254 round_trippers.go:580]     Content-Length: 1220
	I0916 10:33:53.962109   38254 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"7dc87164-1259-473b-bcbc-5a709a2c0af0","resourceVersion":"377","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:33:53.965209   38254 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:33:53.966605   38254 addons.go:510] duration metric: took 4.322556117s for enable addons: enabled=[storage-provisioner default-storageclass]
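The repeated sequence traced above (GET the pod, check its Ready condition, GET the node, retry on "Ready":"False") is the standard client-go readiness poll. A minimal sketch of that pattern follows; it is an illustration built on client-go's public API, not minikube's actual pod_ready.go, and the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the API server until the named pod reports the
// PodReady condition as True, or the timeout elapses.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Placeholder kubeconfig path; adjust for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 6m0s matches the per-pod budget the log reports ("waiting up to 6m0s").
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-functional-546931", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}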
	I0916 10:33:54.040278   38254 request.go:632] Waited for 195.856487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.040342   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.040353   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.040364   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.040371   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.042055   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:54.042077   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.042086   38254 round_trippers.go:580]     Audit-Id: c5198a6f-5bb8-4a26-b48c-26bca8116a3e
	I0916 10:33:54.042091   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.042097   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.042101   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.042105   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.042109   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.042225   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:54.320703   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:54.320728   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.320736   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.320741   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.322904   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:54.322928   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.322936   38254 round_trippers.go:580]     Audit-Id: 5de2991c-0858-4a3e-9a47-e142f64addac
	I0916 10:33:54.322942   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.322946   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.322950   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.322954   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.322958   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.323122   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:54.439811   38254 request.go:632] Waited for 116.306704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.439892   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.439897   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.439907   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.439911   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.441950   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:54.441968   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.441975   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.441979   38254 round_trippers.go:580]     Audit-Id: a2764685-a036-4033-b994-bbe592950d2d
	I0916 10:33:54.441983   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.441987   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.441990   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.441993   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.442185   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:54.820734   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:54.820760   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.820769   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.820774   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.823033   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:54.823059   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.823068   38254 round_trippers.go:580]     Audit-Id: f97efdb6-d58f-4112-a0d4-badbca5fc43f
	I0916 10:33:54.823075   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.823080   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.823085   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.823091   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.823095   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.823237   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:54.839893   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.839942   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.839954   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.839960   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.842167   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:54.842193   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.842200   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.842205   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.842207   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.842210   38254 round_trippers.go:580]     Audit-Id: ad1e3030-3d88-4bcf-82c6-c28d45be3788
	I0916 10:33:54.842212   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.842216   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.842429   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:55.320044   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:55.320068   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:55.320076   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:55.320081   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:55.322332   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:55.322356   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:55.322366   38254 round_trippers.go:580]     Audit-Id: b1f7491a-b7ff-4038-ad39-51ecaa970ccc
	I0916 10:33:55.322370   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:55.322372   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:55.322376   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:55.322380   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:55.322383   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:55 GMT
	I0916 10:33:55.322551   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:55.322974   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:55.322990   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:55.323002   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:55.323008   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:55.324737   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:55.324751   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:55.324758   38254 round_trippers.go:580]     Audit-Id: b9c5e412-285c-43f1-bd9e-49afd075300e
	I0916 10:33:55.324765   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:55.324771   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:55.324775   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:55.324780   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:55.324792   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:55 GMT
	I0916 10:33:55.324970   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:55.325280   38254 pod_ready.go:103] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:33:55.820788   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:55.820811   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:55.820819   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:55.820822   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:55.823202   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:55.823223   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:55.823230   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:55 GMT
	I0916 10:33:55.823234   38254 round_trippers.go:580]     Audit-Id: 93ea65b0-3b2d-45f0-8d9f-7fd6d377658f
	I0916 10:33:55.823238   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:55.823242   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:55.823245   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:55.823247   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:55.823463   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:55.823905   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:55.823922   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:55.823932   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:55.823937   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:55.825728   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:55.825742   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:55.825753   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:55 GMT
	I0916 10:33:55.825756   38254 round_trippers.go:580]     Audit-Id: a9596991-ca14-4d5d-badc-32c5aa86bc01
	I0916 10:33:55.825760   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:55.825762   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:55.825765   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:55.825768   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:55.825951   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:56.320626   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:56.320662   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:56.320673   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:56.320677   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:56.322944   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:56.322961   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:56.322968   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:56 GMT
	I0916 10:33:56.322972   38254 round_trippers.go:580]     Audit-Id: c2f282b4-169c-48e5-b6d4-5177c73ef827
	I0916 10:33:56.322975   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:56.322978   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:56.322980   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:56.322983   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:56.323162   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:56.323542   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:56.323554   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:56.323561   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:56.323564   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:56.325237   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:56.325256   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:56.325266   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:56.325276   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:56.325281   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:56.325286   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:56 GMT
	I0916 10:33:56.325294   38254 round_trippers.go:580]     Audit-Id: 3c2420df-2485-44c9-8205-15e3f735028c
	I0916 10:33:56.325298   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:56.325465   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:56.820054   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:56.820080   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:56.820087   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:56.820091   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:56.822399   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:56.822423   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:56.822433   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:56.822437   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:56.822443   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:56.822449   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:56.822453   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:56 GMT
	I0916 10:33:56.822459   38254 round_trippers.go:580]     Audit-Id: fa6c8bc1-f577-440c-8488-5aec0e46477f
	I0916 10:33:56.822567   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:56.822974   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:56.822988   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:56.822995   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:56.823000   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:56.824585   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:56.824603   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:56.824612   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:56 GMT
	I0916 10:33:56.824620   38254 round_trippers.go:580]     Audit-Id: 007c0986-fb46-469e-896a-3f9e05879f5c
	I0916 10:33:56.824628   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:56.824632   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:56.824638   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:56.824645   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:56.824792   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:57.320450   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:57.320478   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:57.320486   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:57.320489   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:57.322520   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:57.322541   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:57.322547   38254 round_trippers.go:580]     Audit-Id: ee77ef11-fb1f-4703-afa5-559d02e420ba
	I0916 10:33:57.322551   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:57.322554   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:57.322558   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:57.322564   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:57.322568   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:57 GMT
	I0916 10:33:57.322748   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:57.323134   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:57.323148   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:57.323155   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:57.323158   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:57.324904   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:57.324921   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:57.324929   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:57.324934   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:57.324939   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:57.324943   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:57 GMT
	I0916 10:33:57.324947   38254 round_trippers.go:580]     Audit-Id: bf45afbc-2dde-45ac-83b6-34cd3e87137a
	I0916 10:33:57.324951   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:57.325111   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:57.325474   38254 pod_ready.go:103] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"False"
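The half-second cadence above is minikube's readiness wait: each cycle GETs the kube-scheduler pod, then its node, and logs the pod's Ready condition until it turns True or the wait times out. Below is a minimal sketch of an equivalent poll with client-go; it assumes client-go v0.27+ for wait.PollUntilContextTimeout, and waitPodReady is an illustrative name, not minikube's actual pod_ready.go helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls every 500ms (the cadence visible in the log timestamps
// above) until the pod reports Ready=True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // stop on hard API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not posted yet; keep polling
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-functional-546931")
	fmt.Println("ready:", err == nil)
}

The paired node GET in each cycle mirrors what the log shows: minikube re-fetches the node object on every iteration as well, so a full readiness check would also inspect the node's conditions.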
	I0916 10:33:57.820815   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:57.820836   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:57.820844   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:57.820847   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:57.823010   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:57.823033   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:57.823043   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:57.823054   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:57 GMT
	I0916 10:33:57.823058   38254 round_trippers.go:580]     Audit-Id: 8b9870de-9f9d-4f26-a6dd-3c15a4e1cd62
	I0916 10:33:57.823062   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:57.823066   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:57.823072   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:57.823231   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:57.823655   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:57.823671   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:57.823681   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:57.823689   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:57.825433   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:57.825471   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:57.825482   38254 round_trippers.go:580]     Audit-Id: b556fffd-a085-434b-8294-e3e7380d7f2e
	I0916 10:33:57.825490   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:57.825497   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:57.825503   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:57.825513   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:57.825519   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:57 GMT
	I0916 10:33:57.825681   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:58.320335   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:58.320363   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:58.320371   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:58.320376   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:58.322583   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:58.322602   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:58.322608   38254 round_trippers.go:580]     Audit-Id: 44b346cb-cc00-492c-b32a-a62119404892
	I0916 10:33:58.322613   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:58.322617   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:58.322620   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:58.322623   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:58.322626   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:58 GMT
	I0916 10:33:58.322753   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:58.323128   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:58.323141   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:58.323147   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:58.323152   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:58.324775   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:58.324790   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:58.324796   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:58 GMT
	I0916 10:33:58.324800   38254 round_trippers.go:580]     Audit-Id: ecc5a8f8-b95d-41df-a751-9244ce0fee39
	I0916 10:33:58.324803   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:58.324806   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:58.324809   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:58.324812   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:58.324945   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:58.820734   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:58.820761   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:58.820770   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:58.820778   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:58.823145   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:58.823167   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:58.823175   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:58.823180   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:58 GMT
	I0916 10:33:58.823183   38254 round_trippers.go:580]     Audit-Id: a2364f0a-ebbb-497c-b763-1afadf9035e2
	I0916 10:33:58.823188   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:58.823190   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:58.823193   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:58.823316   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:58.823716   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:58.823730   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:58.823736   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:58.823740   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:58.825726   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:58.825754   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:58.825762   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:58.825768   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:58.825778   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:58 GMT
	I0916 10:33:58.825784   38254 round_trippers.go:580]     Audit-Id: 2d702b31-9359-4fa6-9411-2fd783e8dd5e
	I0916 10:33:58.825789   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:58.825794   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:58.825932   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:59.320581   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:59.320607   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:59.320616   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:59.320621   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:59.322982   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:59.323010   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:59.323021   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:59.323026   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:59.323031   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:59 GMT
	I0916 10:33:59.323034   38254 round_trippers.go:580]     Audit-Id: f3054f05-d6c1-4d2f-a614-b395364e5bb8
	I0916 10:33:59.323039   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:59.323043   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:59.323164   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:59.323579   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:59.323596   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:59.323603   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:59.323607   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:59.325421   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:59.325453   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:59.325464   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:59 GMT
	I0916 10:33:59.325470   38254 round_trippers.go:580]     Audit-Id: 2fd4d607-a47e-41b7-8a45-7001a5e948e4
	I0916 10:33:59.325476   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:59.325481   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:59.325489   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:59.325494   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:59.325648   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:59.325944   38254 pod_ready.go:103] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"False"
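A side note on the request headers throughout this log: the User-Agent ends in kubernetes/$Format (and reports v0.0.0) because the test binary was apparently built without version stamping, so client-go's git export placeholder was never substituted. A client can set the field explicitly on its rest.Config; the short sketch below assumes a kubeconfig at the default path.

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Set the User-Agent explicitly rather than relying on build-time
	// stamping; DefaultKubernetesUserAgent derives it from the binary name,
	// OS/arch, and client-go's (possibly unstamped) version, but any custom
	// string can be assigned here instead.
	cfg.UserAgent = rest.DefaultKubernetesUserAgent()
	fmt.Println(cfg.UserAgent)
}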
	I0916 10:33:59.820645   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:59.820674   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:59.820686   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:59.820692   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:59.823119   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:59.823146   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:59.823154   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:59.823161   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:59 GMT
	I0916 10:33:59.823165   38254 round_trippers.go:580]     Audit-Id: d15a0dd8-62a2-42c6-baf2-901ac12065e8
	I0916 10:33:59.823171   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:59.823176   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:59.823180   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:59.823287   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:59.823773   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:59.823789   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:59.823800   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:59.823806   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:59.825642   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:59.825664   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:59.825674   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:59.825678   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:59.825685   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:59 GMT
	I0916 10:33:59.825689   38254 round_trippers.go:580]     Audit-Id: 60a43eac-08d9-4e91-99b1-541773d1eca7
	I0916 10:33:59.825693   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:59.825698   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:59.825874   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:00.320369   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:00.320398   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:00.320408   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:00.320413   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:00.322249   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:00.322270   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:00.322279   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:00 GMT
	I0916 10:34:00.322285   38254 round_trippers.go:580]     Audit-Id: 666ab14f-79c0-4af6-afb7-c2f52d3e5ecd
	I0916 10:34:00.322290   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:00.322296   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:00.322301   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:00.322305   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:00.322412   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:00.322907   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:00.322927   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:00.322938   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:00.322945   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:00.324586   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:00.324611   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:00.324620   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:00 GMT
	I0916 10:34:00.324625   38254 round_trippers.go:580]     Audit-Id: ed628808-6128-4c07-bf36-e134c049106d
	I0916 10:34:00.324630   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:00.324637   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:00.324640   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:00.324648   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:00.324874   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:00.820334   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:00.820358   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:00.820366   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:00.820370   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:00.822843   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:00.822870   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:00.822883   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:00.822890   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:00 GMT
	I0916 10:34:00.822895   38254 round_trippers.go:580]     Audit-Id: 93d3ba47-68b9-4337-b09d-edba2937ed08
	I0916 10:34:00.822900   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:00.822905   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:00.822910   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:00.823078   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:00.823574   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:00.823593   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:00.823604   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:00.823611   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:00.825643   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:00.825662   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:00.825670   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:00.825676   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:00.825681   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:00 GMT
	I0916 10:34:00.825685   38254 round_trippers.go:580]     Audit-Id: 2136686e-a753-4e57-b719-93a1b7b6c12c
	I0916 10:34:00.825689   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:00.825694   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:00.825866   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:01.320542   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:01.320566   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:01.320574   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:01.320578   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:01.323004   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:01.323027   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:01.323038   38254 round_trippers.go:580]     Audit-Id: b8df7cd4-15dc-46b9-8c1c-0ae4633ffde3
	I0916 10:34:01.323043   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:01.323047   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:01.323051   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:01.323055   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:01.323059   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:01 GMT
	I0916 10:34:01.323146   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:01.323527   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:01.323541   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:01.323550   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:01.323555   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:01.325649   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:01.325669   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:01.325675   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:01 GMT
	I0916 10:34:01.325678   38254 round_trippers.go:580]     Audit-Id: 09891e39-5d40-4516-9eda-852bef0ec59d
	I0916 10:34:01.325681   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:01.325684   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:01.325687   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:01.325690   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:01.325862   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:01.326191   38254 pod_ready.go:103] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"False"
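The pod bodies repeated above also identify this kube-scheduler as a static pod: kubernetes.io/config.source is "file", and config.mirror matches config.hash, marking the API object as the kubelet's mirror of a file-defined pod. A small detection helper follows as a sketch; isStaticMirrorPod is an illustrative name, not an upstream API.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isStaticMirrorPod reports whether a Pod object is the API-server mirror of
// a kubelet static pod, using the annotations visible in the responses above:
// config.source=file plus a config.mirror hash matching config.hash.
func isStaticMirrorPod(pod *corev1.Pod) bool {
	a := pod.Annotations
	return a["kubernetes.io/config.source"] == "file" &&
		a["kubernetes.io/config.mirror"] != "" &&
		a["kubernetes.io/config.mirror"] == a["kubernetes.io/config.hash"]
}

func main() {
	// Values copied from the logged kube-scheduler-functional-546931 pod.
	p := &corev1.Pod{}
	p.Annotations = map[string]string{
		"kubernetes.io/config.source": "file",
		"kubernetes.io/config.mirror": "adb8a765a0d6f587897c42f69e87ac66",
		"kubernetes.io/config.hash":   "adb8a765a0d6f587897c42f69e87ac66",
	}
	fmt.Println(isStaticMirrorPod(p)) // true
}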
	I0916 10:34:01.820611   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:01.820638   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:01.820649   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:01.820654   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:01.823173   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:01.823194   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:01.823201   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:01.823205   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:01.823208   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:01 GMT
	I0916 10:34:01.823211   38254 round_trippers.go:580]     Audit-Id: 23913692-996c-43c8-805e-f70780f0630d
	I0916 10:34:01.823214   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:01.823216   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:01.823370   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:01.823803   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:01.823818   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:01.823825   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:01.823828   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:01.825788   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:01.825811   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:01.825821   38254 round_trippers.go:580]     Audit-Id: 079e1a21-5253-4f0c-b187-bf832d122510
	I0916 10:34:01.825826   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:01.825833   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:01.825835   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:01.825838   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:01.825841   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:01 GMT
	I0916 10:34:01.825965   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:02.320721   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:02.320748   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:02.320756   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:02.320760   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:02.322968   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:02.322994   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:02.323001   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:02.323006   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:02 GMT
	I0916 10:34:02.323010   38254 round_trippers.go:580]     Audit-Id: 33d8ee38-f4ba-4044-821a-c1a98fc88f52
	I0916 10:34:02.323013   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:02.323016   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:02.323019   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:02.323169   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:02.323569   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:02.323582   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:02.323588   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:02.323596   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:02.325386   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:02.325408   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:02.325418   38254 round_trippers.go:580]     Audit-Id: 149dee04-a1d1-4a2c-9543-448d002743c1
	I0916 10:34:02.325426   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:02.325431   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:02.325437   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:02.325463   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:02.325472   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:02 GMT
	I0916 10:34:02.325653   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:02.820273   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:02.820302   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:02.820310   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:02.820314   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:02.822782   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:02.822807   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:02.822815   38254 round_trippers.go:580]     Audit-Id: b8c02830-bf20-4086-a35b-5ddf99e664ff
	I0916 10:34:02.822821   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:02.822826   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:02.822829   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:02.822832   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:02.822836   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:02 GMT
	I0916 10:34:02.822938   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:02.823337   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:02.823351   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:02.823358   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:02.823363   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:02.825262   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:02.825281   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:02.825290   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:02.825296   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:02.825301   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:02.825307   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:02.825311   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:02 GMT
	I0916 10:34:02.825316   38254 round_trippers.go:580]     Audit-Id: 0d03e038-f5cd-4017-b5c9-4dd1a324073d
	I0916 10:34:02.825520   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:03.320100   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:03.320127   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.320135   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.320140   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.322583   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.322613   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.322622   38254 round_trippers.go:580]     Audit-Id: 13854a33-5ffa-49ec-bd4b-388773c01dd5
	I0916 10:34:03.322629   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.322632   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.322635   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.322638   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.322642   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.322822   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:03.323193   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:03.323205   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.323212   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.323217   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.324947   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:03.324963   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.324970   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.324976   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.324980   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.324983   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.324986   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.324989   38254 round_trippers.go:580]     Audit-Id: f270afe4-49bc-466e-a752-d87f3eea1493
	I0916 10:34:03.325105   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:03.820870   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:03.820895   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.820903   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.820907   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.823280   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.823307   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.823318   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.823325   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.823328   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.823332   38254 round_trippers.go:580]     Audit-Id: d27beddf-0127-4261-bf41-c13ee88100e5
	I0916 10:34:03.823335   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.823338   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.823493   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"533","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5177 chars]
	I0916 10:34:03.823902   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:03.823918   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.823924   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.823928   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.825921   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:03.825945   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.825954   38254 round_trippers.go:580]     Audit-Id: 725f2c59-296c-4129-b350-9f76c3e0f784
	I0916 10:34:03.825960   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.825965   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.825969   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.825973   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.825977   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.826082   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:03.826387   38254 pod_ready.go:93] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:03.826404   38254 pod_ready.go:82] duration metric: took 10.506765676s for pod "kube-scheduler-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:03.826415   38254 pod_ready.go:39] duration metric: took 10.787048666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:34:03.826433   38254 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:34:03.826480   38254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:34:03.836807   38254 command_runner.go:130] > 3244
	I0916 10:34:03.837712   38254 api_server.go:72] duration metric: took 14.193700208s to wait for apiserver process to appear ...
	I0916 10:34:03.837741   38254 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:34:03.837769   38254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:34:03.842554   38254 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0916 10:34:03.842659   38254 round_trippers.go:463] GET https://192.168.49.2:8441/version
	I0916 10:34:03.842668   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.842676   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.842682   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.843506   38254 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:34:03.843527   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.843533   38254 round_trippers.go:580]     Audit-Id: 4fa131c1-349b-4955-8ff8-e9dd0a8409e7
	I0916 10:34:03.843537   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.843540   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.843543   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.843549   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.843553   38254 round_trippers.go:580]     Content-Length: 263
	I0916 10:34:03.843559   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.843578   38254 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:34:03.843692   38254 api_server.go:141] control plane version: v1.31.1
	I0916 10:34:03.843716   38254 api_server.go:131] duration metric: took 5.967207ms to wait for apiserver health ...
	I0916 10:34:03.843726   38254 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:34:03.843802   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:34:03.843812   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.843822   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.843832   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.846170   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.846194   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.846208   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.846214   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.846219   38254 round_trippers.go:580]     Audit-Id: 95c1ad71-fc2a-4a8b-8ff5-79003879fc7e
	I0916 10:34:03.846224   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.846231   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.846239   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.846721   38254 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-wjzzx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","resourceVersion":"471","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"e5f0af21-e8d5-4d2c-a475-5941bddff6bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5f0af21-e8d5-4d2c-a475-5941bddff6bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 61610 chars]
	I0916 10:34:03.848543   38254 system_pods.go:59] 8 kube-system pods found
	I0916 10:34:03.848585   38254 system_pods.go:61] "coredns-7c65d6cfc9-wjzzx" [2df1d14c-ae32-4b0d-b3fa-6cdcab40919a] Running
	I0916 10:34:03.848593   38254 system_pods.go:61] "etcd-functional-546931" [7fe96e5a-6112-4e96-981b-b15be906fa34] Running
	I0916 10:34:03.848598   38254 system_pods.go:61] "kindnet-6dtx8" [44bb424a-c279-467b-9256-64be125798f9] Running
	I0916 10:34:03.848605   38254 system_pods.go:61] "kube-apiserver-functional-546931" [19d3920d-b342-4764-b722-116797db07ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:34:03.848621   38254 system_pods.go:61] "kube-controller-manager-functional-546931" [49789d64-6fd1-441c-b9e0-470a0832d127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:34:03.848628   38254 system_pods.go:61] "kube-proxy-kshs9" [c2a1ef0a-22f5-4b04-a7fe-30e019b2687b] Running
	I0916 10:34:03.848632   38254 system_pods.go:61] "kube-scheduler-functional-546931" [40d727b8-b05b-40b1-9837-87741459ef16] Running
	I0916 10:34:03.848638   38254 system_pods.go:61] "storage-provisioner" [a7e94614-567e-47ba-a51a-426f09198dba] Running
	I0916 10:34:03.848644   38254 system_pods.go:74] duration metric: took 4.909588ms to wait for pod list to return data ...
	I0916 10:34:03.848654   38254 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:34:03.848728   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/default/serviceaccounts
	I0916 10:34:03.848736   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.848742   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.848745   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.851110   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.851132   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.851142   38254 round_trippers.go:580]     Audit-Id: 0c4c4e4f-2a84-4e40-8e0b-80cb31bddf7e
	I0916 10:34:03.851149   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.851156   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.851161   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.851167   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.851173   38254 round_trippers.go:580]     Content-Length: 261
	I0916 10:34:03.851177   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.851198   38254 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0e9c2a95-502e-45bd-bfd7-c5d3bafcf61a","resourceVersion":"327","creationTimestamp":"2024-09-16T10:33:26Z"}}]}
	I0916 10:34:03.851350   38254 default_sa.go:45] found service account: "default"
	I0916 10:34:03.851364   38254 default_sa.go:55] duration metric: took 2.70174ms for default service account to be created ...
	I0916 10:34:03.851371   38254 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:34:03.851420   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:34:03.851427   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.851433   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.851437   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.853705   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.853726   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.853735   38254 round_trippers.go:580]     Audit-Id: 04a87712-1128-4c95-a249-6b98ac8a0c1f
	I0916 10:34:03.853739   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.853745   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.853750   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.853755   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.853763   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.854196   38254 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-wjzzx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","resourceVersion":"471","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"e5f0af21-e8d5-4d2c-a475-5941bddff6bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5f0af21-e8d5-4d2c-a475-5941bddff6bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 61610 chars]
	I0916 10:34:03.855991   38254 system_pods.go:86] 8 kube-system pods found
	I0916 10:34:03.856010   38254 system_pods.go:89] "coredns-7c65d6cfc9-wjzzx" [2df1d14c-ae32-4b0d-b3fa-6cdcab40919a] Running
	I0916 10:34:03.856015   38254 system_pods.go:89] "etcd-functional-546931" [7fe96e5a-6112-4e96-981b-b15be906fa34] Running
	I0916 10:34:03.856019   38254 system_pods.go:89] "kindnet-6dtx8" [44bb424a-c279-467b-9256-64be125798f9] Running
	I0916 10:34:03.856024   38254 system_pods.go:89] "kube-apiserver-functional-546931" [19d3920d-b342-4764-b722-116797db07ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:34:03.856033   38254 system_pods.go:89] "kube-controller-manager-functional-546931" [49789d64-6fd1-441c-b9e0-470a0832d127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:34:03.856039   38254 system_pods.go:89] "kube-proxy-kshs9" [c2a1ef0a-22f5-4b04-a7fe-30e019b2687b] Running
	I0916 10:34:03.856043   38254 system_pods.go:89] "kube-scheduler-functional-546931" [40d727b8-b05b-40b1-9837-87741459ef16] Running
	I0916 10:34:03.856051   38254 system_pods.go:89] "storage-provisioner" [a7e94614-567e-47ba-a51a-426f09198dba] Running
	I0916 10:34:03.856057   38254 system_pods.go:126] duration metric: took 4.679727ms to wait for k8s-apps to be running ...
	I0916 10:34:03.856063   38254 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:34:03.856106   38254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:34:03.866975   38254 system_svc.go:56] duration metric: took 10.90356ms WaitForService to wait for kubelet
	I0916 10:34:03.867005   38254 kubeadm.go:582] duration metric: took 14.22299597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:34:03.867022   38254 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:34:03.867097   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes
	I0916 10:34:03.867108   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.867116   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.867119   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.869660   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.869694   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.869702   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.869708   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.869713   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.869718   38254 round_trippers.go:580]     Audit-Id: 53543f02-095e-42de-97a3-11493905ae50
	I0916 10:34:03.869722   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.869727   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.869909   38254 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 6004 chars]
	I0916 10:34:03.870264   38254 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:34:03.870285   38254 node_conditions.go:123] node cpu capacity is 8
	I0916 10:34:03.870297   38254 node_conditions.go:105] duration metric: took 3.26967ms to run NodePressure ...
	I0916 10:34:03.870310   38254 start.go:241] waiting for startup goroutines ...
	I0916 10:34:03.870323   38254 start.go:246] waiting for cluster config update ...
	I0916 10:34:03.870338   38254 start.go:255] writing updated cluster config ...
	I0916 10:34:03.870574   38254 ssh_runner.go:195] Run: rm -f paused
	I0916 10:34:03.877276   38254 out.go:177] * Done! kubectl is now configured to use "functional-546931" cluster and "default" namespace by default
	E0916 10:34:03.878464   38254 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
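
Note on the final error line above: "fork/exec ...: exec format error" is the kernel's ENOEXEC, returned when /usr/local/bin/kubectl exists but is not a binary this host can run (typically a wrong-architecture build or a truncated download). A minimal Go reproduction of roughly the check minikube performs here (path taken from the log line above; this is an illustrative sketch, not minikube's actual code):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"syscall"
    )

    func main() {
    	// Running a non-native binary surfaces as an *os.PathError wrapping ENOEXEC.
    	err := exec.Command("/usr/local/bin/kubectl", "version", "--client").Run()
    	if errors.Is(err, syscall.ENOEXEC) {
    		fmt.Println("kubectl is not runnable on this machine:", err)
    		// `file /usr/local/bin/kubectl` on the host shows the binary's actual
    		// architecture; replacing it with a linux/amd64 build clears this.
    		return
    	}
    	fmt.Println("err:", err)
    }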
	
	
	==> CRI-O <==
	Sep 16 10:33:51 functional-546931 crio[2734]: time="2024-09-16 10:33:51.207248970Z" level=info msg="Removing container: 3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0" id=62151ee8-c6a5-464d-8cec-978cf6447b1b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:33:51 functional-546931 crio[2734]: time="2024-09-16 10:33:51.220509458Z" level=info msg="Removed container 3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0: kube-system/storage-provisioner/storage-provisioner" id=62151ee8-c6a5-464d-8cec-978cf6447b1b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.127017911Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.130789847Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.130822916Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.130842517Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.134390994Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.134424030Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.134441843Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.137780881Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.137811484Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.137824667Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.141166008Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.141199175Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.043617802Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b19bd0ad-8d17-44c9-a9b4-626c95672d21 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.043888575Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b19bd0ad-8d17-44c9-a9b4-626c95672d21 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.044697477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f39dd9d7-ba32-45ca-acb8-d16f771a618c name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.044899874Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f39dd9d7-ba32-45ca-acb8-d16f771a618c name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.045656789Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c30634c2-a767-4c37-8657-e33888d2d54b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.045771114Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.058645949Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2ef8a64a5dc923c464e4178a52da4363133a12d896b2a2bc34be28bf1942ad23/merged/etc/passwd: no such file or directory"
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.058689405Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2ef8a64a5dc923c464e4178a52da4363133a12d896b2a2bc34be28bf1942ad23/merged/etc/group: no such file or directory"
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.092922558Z" level=info msg="Created container a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b: kube-system/storage-provisioner/storage-provisioner" id=c30634c2-a767-4c37-8657-e33888d2d54b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.093594796Z" level=info msg="Starting container: a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b" id=973194aa-7683-4156-b951-0194505df2af name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.100337706Z" level=info msg="Started container" PID=3687 containerID=a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b description=kube-system/storage-provisioner/storage-provisioner id=973194aa-7683-4156-b951-0194505df2af name=/runtime.v1.RuntimeService/StartContainer sandboxID=2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b
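
The "CNI monitoring event" lines above are CRI-O watching /etc/cni/net.d for config changes: kindnet writes 10-kindnet.conflist.temp and then renames it into place, producing the CREATE/WRITE/RENAME sequence. A stand-alone sketch of the same kind of watch using github.com/fsnotify/fsnotify (illustrative only, not CRI-O's own code):

    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()

    	if err := w.Add("/etc/cni/net.d"); err != nil {
    		log.Fatal(err)
    	}
    	for {
    		select {
    		case ev := <-w.Events:
    			// A temp-file write followed by a rename yields the same
    			// CREATE -> WRITE -> RENAME sequence CRI-O logs above.
    			log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
    		case err := <-w.Errors:
    			log.Println("watch error:", err)
    		}
    	}
    }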
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a51e8bf1740c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       2                   2133c690032da       storage-provisioner
	03c9ff61deb56       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 seconds ago      Running             kube-scheduler            1                   f41f93397a4f0       kube-scheduler-functional-546931
	500f67fe93de9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 seconds ago      Running             coredns                   1                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	0b7754d27e88e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 seconds ago      Running             kube-apiserver            1                   e87884b43c8cc       kube-apiserver-functional-546931
	1923f1dc4c46c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 seconds ago      Running             etcd                      1                   5b3fe285a2416       etcd-functional-546931
	8578098c4830c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 seconds ago      Running             kube-controller-manager   1                   878410a4a3694       kube-controller-manager-functional-546931
	e2626d8943ee8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   14 seconds ago      Running             kindnet-cni               1                   4aa3f5aefc537       kindnet-6dtx8
	ce7cf09b88b18       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 seconds ago      Running             kube-proxy                1                   f14f9778290af       kube-proxy-kshs9
	245fe0ec85c5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       1                   2133c690032da       storage-provisioner
	046d8febeb6af       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   26 seconds ago      Exited              coredns                   0                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	fa5a2b32930d3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   37 seconds ago      Exited              kube-proxy                0                   f14f9778290af       kube-proxy-kshs9
	af58051ec3f44       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   37 seconds ago      Exited              kindnet-cni               0                   4aa3f5aefc537       kindnet-6dtx8
	162127b15fc39       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   48 seconds ago      Exited              kube-controller-manager   0                   878410a4a3694       kube-controller-manager-functional-546931
	f2b587ead9ac6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   48 seconds ago      Exited              etcd                      0                   5b3fe285a2416       etcd-functional-546931
	75f3c10606812       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   48 seconds ago      Exited              kube-scheduler            0                   f41f93397a4f0       kube-scheduler-functional-546931
	9821c40f08076       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   48 seconds ago      Exited              kube-apiserver            0                   e87884b43c8cc       kube-apiserver-functional-546931
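
The table above is a CRI container listing (the same columns `crictl ps -a` prints); ATTEMPT counts restarts of a container within its pod sandbox, which is why each Exited entry from the first start has a Running successor with a higher attempt number. A minimal sketch that pulls the same data straight from CRI-O's runtime service, assuming the k8s.io/cri-api Go bindings and the socket path advertised in the node annotations (unix:///var/run/crio/crio.sock):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := pb.NewRuntimeServiceClient(conn)
    	resp, err := rt.ListContainers(context.TODO(), &pb.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		// Id is truncated to 13 chars the way crictl renders it.
    		fmt.Printf("%.13s  %-28s attempt=%d  %s\n",
    			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
    	}
    }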
	
	
	==> coredns [046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44815 - 46736 "HINFO IN 2073509327164801531.6002369803072694315. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010858245s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
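
This first instance shut down cleanly: on SIGTERM the health plugin enters "lameduck" mode, reporting unhealthy for a grace window (5s here) so anything load-balancing to the pod can drain before the DNS servers actually stop. The generic shape of that pattern in Go (our own sketch; the 5s window mirrors the log line above):

    package main

    import (
    	"log"
    	"os"
    	"os/signal"
    	"syscall"
    	"time"
    )

    func main() {
    	sigs := make(chan os.Signal, 1)
    	signal.Notify(sigs, syscall.SIGTERM)
    	<-sigs // block until the kubelet sends SIGTERM

    	log.Println("going into lameduck mode for 5s")
    	// Report unhealthy but keep serving during the window so in-flight
    	// clients and load balancers have time to move off this endpoint.
    	time.Sleep(5 * time.Second)
    	log.Println("terminating servers")
    }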
	
	
	==> coredns [500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32777 - 2477 "HINFO IN 3420670606416057959.5314460485211468677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080961734s
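
The restarted instance logs the other half of that story: it comes up while the apiserver is still down (the connection-refused errors against 10.96.0.1:443, the kubernetes Service VIP), and the ready plugin holds the pod not-ready ("Still waiting on: \"kubernetes\"") until the API caches sync. By default that plugin answers on :8181/ready; a probe sketch (port and path are the plugin's documented defaults, and the loopback address assumes you are inside the pod's network namespace):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// 503 while any plugin (here: kubernetes) is still waiting, 200 once all
    	// plugins have signalled readiness.
    	resp, err := http.Get("http://127.0.0.1:8181/ready")
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("readiness:", resp.Status)
    }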
	
	
	==> describe nodes <==
	Name:               functional-546931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-546931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_33_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:33:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:33:38 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:33:38 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:33:38 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:33:38 +0000   Mon, 16 Sep 2024 10:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f68b7ee331b4ad9bbce7c85ad5c1bae
	  System UUID:                b53a3b64-9d61-46d9-a694-0cd93fe258a6
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wjzzx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     38s
	  kube-system                 etcd-functional-546931                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         45s
	  kube-system                 kindnet-6dtx8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      38s
	  kube-system                 kube-apiserver-functional-546931             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-functional-546931    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-kshs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-functional-546931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 37s                kube-proxy       
	  Normal   Starting                 11s                kube-proxy       
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x7 over 49s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 43s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  43s                kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s                kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s                kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   Starting                 43s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           40s                node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   NodeReady                27s                kubelet          Node functional-546931 status is now: NodeReady
	  Normal   RegisteredNode           9s                 node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
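
A quick cross-check of the Allocated resources block above: summing the per-pod CPU requests from the Non-terminated Pods table against the node's 8 allocatable CPUs reproduces the 850m figure, and integer (floored) division reproduces the 10% (850/8000 is really 10.625%, which the describe output appears to floor):

    package main

    import "fmt"

    func main() {
    	// Per-pod CPU requests in millicores, in table order: coredns, etcd,
    	// kindnet, kube-apiserver, kube-controller-manager, kube-proxy,
    	// kube-scheduler, storage-provisioner.
    	requests := []int{100, 100, 100, 250, 200, 0, 100, 0}
    	total := 0
    	for _, r := range requests {
    		total += r
    	}
    	allocatable := 8000 // 8 CPUs in millicores
    	fmt.Printf("cpu %dm (%d%%)\n", total, total*100/allocatable) // cpu 850m (10%)
    }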
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.000714]  #3
	[  +0.002750]  #4
	[  +0.001708] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003513] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002098] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54] <==
	{"level":"info","ts":"2024-09-16T10:33:50.922008Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:50.922164Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:50.994403Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:50.995578Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-16T10:33:50.997016Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:33:50.997239Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:33:50.999359Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:33:50.997487Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:33:50.997525Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:33:51.496075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.497277Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:51.497313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.497494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.498556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.498618Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.499441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:51.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb] <==
	{"level":"info","ts":"2024-09-16T10:33:17.828400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:17.828407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:17.829406Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:17.829961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:17.829996Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:17.829958Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:17.830200Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:17.830240Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:17.830391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:17.830513Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:17.830541Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:17.832163Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:17.831931Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:17.832946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:17.833325Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:33:42.078427Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:33:42.078546Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:33:42.078678Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:33:42.078827Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:33:42.101370Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:33:42.101428Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:33:42.102916Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:33:42.104829Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:33:42.104933Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:33:42.104947Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:34:05 up 16 min,  0 users,  load average: 0.44, 0.43, 0.31
	Linux functional-546931 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02] <==
	I0916 10:33:27.696834       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:27.697066       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:27.697210       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:27.697228       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:27.697244       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:28.093776       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:28.093812       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:28.093820       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:28.294858       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:28.294886       1 metrics.go:61] Registering metrics
	I0916 10:33:28.294944       1 controller.go:374] Syncing nftables rules
	I0916 10:33:38.093893       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:38.093946       1 main.go:299] handling current node
	
	
	==> kindnet [e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e] <==
	I0916 10:33:50.598229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:50.599351       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:50.600449       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:50.600526       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:50.600569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:51.126371       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:51.126391       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:51.126399       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:53.293595       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:53.293784       1 metrics.go:61] Registering metrics
	I0916 10:33:53.293935       1 controller.go:374] Syncing nftables rules
	I0916 10:34:01.126660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:01.126723       1 main.go:299] handling current node
	
	
	==> kube-apiserver [0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75] <==
	I0916 10:33:53.025414       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:33:53.025525       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:33:53.025419       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0916 10:33:53.037294       1 controller.go:78] Starting OpenAPI AggregationController
	I0916 10:33:53.110615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:33:53.117071       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:33:53.195239       1 policy_source.go:224] refreshing policies
	I0916 10:33:53.193869       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:33:53.195821       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:33:53.194072       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:33:53.194090       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:33:53.194128       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:33:53.194141       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:33:53.194364       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:33:53.194377       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:33:53.196219       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:33:53.197527       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:33:53.197564       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:33:53.197596       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:33:53.203974       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:33:53.207909       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:33:53.215549       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:33:54.026461       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:33:56.595505       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:33:56.645804       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81] <==
	W0916 10:33:42.091265       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091305       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091210       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091323       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091367       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091380       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091412       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091311       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091453       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091440       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091500       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091505       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091502       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091571       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091568       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091662       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091963       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091981       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0916 10:33:42.091671       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0916 10:33:42.091697       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091760       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.092046       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091824       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.092098       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091912       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02] <==
	I0916 10:33:26.175702       1 shared_informer.go:320] Caches are synced for crt configmap
	I0916 10:33:26.177967       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0916 10:33:26.226006       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:33:26.275043       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:26.279632       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:26.286417       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:26.705856       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:26.793550       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:26.793589       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:26.894010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:27.295783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="490.593748ms"
	I0916 10:33:27.304036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.092516ms"
	I0916 10:33:27.304148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.561µs"
	I0916 10:33:27.315397       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.434µs"
	I0916 10:33:27.424337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.189483ms"
	I0916 10:33:27.430800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.42195ms"
	I0916 10:33:27.430920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.394µs"
	I0916 10:33:38.213413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:38.224934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:38.230428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="77.364µs"
	I0916 10:33:38.243910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="90.886µs"
	I0916 10:33:39.144530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.651µs"
	I0916 10:33:39.162343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.700399ms"
	I0916 10:33:39.162441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.723µs"
	I0916 10:33:41.001062       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b] <==
	I0916 10:33:56.401158       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:33:56.401164       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:33:56.401172       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:33:56.401277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:56.403349       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:33:56.403423       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:33:56.403506       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:33:56.403561       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:33:56.513024       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:33:56.541883       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:56.542896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:33:56.544059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:33:56.544137       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:33:56.544141       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:33:56.548517       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.583700       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:33:56.600343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.606853       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:33:56.702066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.654324ms"
	I0916 10:33:56.702225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.375µs"
	I0916 10:33:57.010557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042413       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:58.552447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.544591ms"
	I0916 10:33:58.552540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.665µs"
	
	
	==> kube-proxy [ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b] <==
	I0916 10:33:50.617128       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:53.201354       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:53.201554       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:53.314988       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:53.315060       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:53.318944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:53.319862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:53.319904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.321510       1 config.go:199] "Starting service config controller"
	I0916 10:33:53.321547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:53.321583       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:53.321592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:53.322001       1 config.go:328] "Starting node config controller"
	I0916 10:33:53.322360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:53.421890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:53.421914       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:53.422563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d] <==
	I0916 10:33:27.653280       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:27.801980       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:27.802051       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:27.821462       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:27.821527       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:27.823372       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:27.823814       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:27.823902       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:27.825081       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:27.825126       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:27.825165       1 config.go:328] "Starting node config controller"
	I0916 10:33:27.825175       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:27.825157       1 config.go:199] "Starting service config controller"
	I0916 10:33:27.825211       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:27.926184       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:33:27.926206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:27.926251       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a] <==
	I0916 10:33:51.925005       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:33:53.094343       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:33:53.094399       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:33:53.094414       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:33:53.094424       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:33:53.205695       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:33:53.205808       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.208746       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:33:53.208879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:33:53.208938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:33:53.208906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:33:53.309785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534] <==
	W0916 10:33:19.419287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:33:19.419370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:19.419369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:33:19.419487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:19.419217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:33:19.419567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:19.419219       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:19.419640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:19.419337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:33:19.419280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:33:19.419721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.228031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:20.228078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.241756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:20.241792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.275701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:20.275752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.285352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:20.285403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.338295       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:33:20.338343       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:33:20.367158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:33:20.367201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0916 10:33:23.315206       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:33:42.078772       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.163358    1678 status_manager.go:851] "Failed to get status for pod" podUID="c02f70efafdd9ad1683640c8d3761d1d" pod="kube-system/kube-controller-manager-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.163562    1678 status_manager.go:851] "Failed to get status for pod" podUID="4f74e884ad630d68b59e0dbdb6055584" pod="kube-system/etcd-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.163748    1678 status_manager.go:851] "Failed to get status for pod" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" pod="kube-system/kube-apiserver-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.163987    1678 status_manager.go:851] "Failed to get status for pod" podUID="adb8a765a0d6f587897c42f69e87ac66" pod="kube-system/kube-scheduler-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.164146    1678 scope.go:117] "RemoveContainer" containerID="046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.164272    1678 status_manager.go:851] "Failed to get status for pod" podUID="44bb424a-c279-467b-9256-64be125798f9" pod="kube-system/kindnet-6dtx8" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-6dtx8\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.164564    1678 status_manager.go:851] "Failed to get status for pod" podUID="c2a1ef0a-22f5-4b04-a7fe-30e019b2687b" pod="kube-system/kube-proxy-kshs9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kshs9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.164802    1678 status_manager.go:851] "Failed to get status for pod" podUID="a7e94614-567e-47ba-a51a-426f09198dba" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.165088    1678 status_manager.go:851] "Failed to get status for pod" podUID="44bb424a-c279-467b-9256-64be125798f9" pod="kube-system/kindnet-6dtx8" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-6dtx8\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.165321    1678 status_manager.go:851] "Failed to get status for pod" podUID="c2a1ef0a-22f5-4b04-a7fe-30e019b2687b" pod="kube-system/kube-proxy-kshs9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kshs9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166136    1678 status_manager.go:851] "Failed to get status for pod" podUID="2df1d14c-ae32-4b0d-b3fa-6cdcab40919a" pod="kube-system/coredns-7c65d6cfc9-wjzzx" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-wjzzx\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166375    1678 status_manager.go:851] "Failed to get status for pod" podUID="a7e94614-567e-47ba-a51a-426f09198dba" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166575    1678 status_manager.go:851] "Failed to get status for pod" podUID="c02f70efafdd9ad1683640c8d3761d1d" pod="kube-system/kube-controller-manager-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166740    1678 status_manager.go:851] "Failed to get status for pod" podUID="4f74e884ad630d68b59e0dbdb6055584" pod="kube-system/etcd-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166955    1678 status_manager.go:851] "Failed to get status for pod" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" pod="kube-system/kube-apiserver-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.167270    1678 status_manager.go:851] "Failed to get status for pod" podUID="adb8a765a0d6f587897c42f69e87ac66" pod="kube-system/kube-scheduler-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: E0916 10:33:50.294098    1678 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-546931.17f5b2fabfbdf074  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-546931,UID:4f74e884ad630d68b59e0dbdb6055584,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-546931,},FirstTimestamp:2024-09-16 10:33:42.194917492 +0000 UTC m=+20.238825086,LastTimestamp:2024-09-16 10:33:42.194917492 +0000 UTC m=+20.238825086,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-546931,}"
	Sep 16 10:33:51 functional-546931 kubelet[1678]: I0916 10:33:51.205790    1678 scope.go:117] "RemoveContainer" containerID="3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0"
	Sep 16 10:33:51 functional-546931 kubelet[1678]: I0916 10:33:51.206016    1678 scope.go:117] "RemoveContainer" containerID="245fe0ec85c5b458982c183eaaf1a0eb8937ac0b38e254df02ec5726c325717c"
	Sep 16 10:33:51 functional-546931 kubelet[1678]: E0916 10:33:51.206166    1678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a7e94614-567e-47ba-a51a-426f09198dba)\"" pod="kube-system/storage-provisioner" podUID="a7e94614-567e-47ba-a51a-426f09198dba"
	Sep 16 10:33:52 functional-546931 kubelet[1678]: E0916 10:33:52.113293    1678 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482832113062114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:33:52 functional-546931 kubelet[1678]: E0916 10:33:52.113354    1678 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482832113062114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:02 functional-546931 kubelet[1678]: I0916 10:34:02.043015    1678 scope.go:117] "RemoveContainer" containerID="245fe0ec85c5b458982c183eaaf1a0eb8937ac0b38e254df02ec5726c325717c"
	Sep 16 10:34:02 functional-546931 kubelet[1678]: E0916 10:34:02.115224    1678 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482842114964982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:02 functional-546931 kubelet[1678]: E0916 10:34:02.115263    1678 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482842114964982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [245fe0ec85c5b458982c183eaaf1a0eb8937ac0b38e254df02ec5726c325717c] <==
	I0916 10:33:50.321558       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:33:50.323516       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b] <==
	I0916 10:34:02.111528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:02.120479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:02.120525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546931 -n functional-546931
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (413.077µs)
helpers_test.go:263: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/KubeContext (2.35s)
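
Every kubectl invocation in this run fails with "fork/exec /usr/local/bin/kubectl: exec format error" before a single request reaches the cluster, which is a client-side symptom: the kernel refuses to execute the binary itself, most often because it is a wrong-architecture or truncated download. A minimal diagnostic sketch for the CI host follows; the re-download URL is the standard dl.k8s.io pattern, and v1.31.1/x86_64 are taken from the kube-proxy and kernel output above, so adjust both if reproducing elsewhere.

# "exec format error" from fork/exec means the kernel rejected the binary;
# inspect what is actually installed before suspecting the cluster.
file /usr/local/bin/kubectl      # expect: ELF 64-bit LSB executable, x86-64
uname -m                         # host architecture; x86_64 per the kernel log above
ls -l /usr/local/bin/kubectl     # a zero-byte file or a saved HTML error page fails the same way

# Replace it with a matching build (official release download pattern, pinned
# to the v1.31.1 control plane seen in the logs):
curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
install -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client         # should now execute instead of failing at fork/exec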

x
+
TestFunctional/serial/KubectlGetPods (2.3s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-546931 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-546931 get po -A: fork/exec /usr/local/bin/kubectl: exec format error (314.598µs)
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-546931 get po -A" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-546931 get po -A"
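
Since the etcd, apiserver, and kube-proxy logs above show a clean restart, and the "exec format error" occurs before any network traffic, the cluster itself is likely fine. A hedged way to verify that the breakage is confined to /usr/local/bin/kubectl is to bypass it with minikube's bundled client, using the same out/minikube-linux-amd64 binary the harness runs in its post-mortems:

# minikube ships a kubectl matching the deployed cluster version; everything
# after "--" is passed through to that bundled client.
out/minikube-linux-amd64 -p functional-546931 kubectl -- get pods -A

# Independently confirm the control-plane components the logs above cover:
out/minikube-linux-amd64 status -p functional-546931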
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-546931
helpers_test.go:235: (dbg) docker inspect functional-546931:

-- stdout --
	[
	    {
	        "Id": "481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383",
	        "Created": "2024-09-16T10:33:07.830189623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:33:07.949246182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hostname",
	        "HostsPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hosts",
	        "LogPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383-json.log",
	        "Name": "/functional-546931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-546931:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-546931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-546931",
	                "Source": "/var/lib/docker/volumes/functional-546931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546931",
	                "name.minikube.sigs.k8s.io": "functional-546931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a63c1ddb1b935e3fe8e5ef70fdb0c600197ad5f66a82a23245d6065ac1a636ff",
	            "SandboxKey": "/var/run/docker/netns/a63c1ddb1b93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c19058e5aabeca0bc30434433d26203e7a45051a16cbafeae207abc5b1915f6c",
	                    "EndpointID": "d06fb1106d7a54a1e55e6e03322a29be01414e698106136216a156a15ae725c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546931",
	                        "481b09cdfdae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
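The port mappings captured in the inspect output above can be read back with the same Go template the test harness itself runs later in these logs; a minimal sketch against this profile (container name functional-546931 and port 32778 taken from the NetworkSettings block above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-546931
	# prints the host port mapped to the container's SSH port; 32778 per the inspect output above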
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546931 -n functional-546931
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs -n 25: (1.48040569s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | enable headlamp                | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:26 UTC |
	|         | -p addons-821781               |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| addons  | addons-821781 addons disable   | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | headlamp --alsologtostderr     |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-821781 addons disable   | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-821781 addons           | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:31 UTC |
	|         | disable metrics-server         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| stop    | -p addons-821781               | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| addons  | enable dashboard -p            | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-821781                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-821781                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-821781                  |                   |         |         |                     |                     |
	| delete  | -p addons-821781               | addons-821781     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| start   | -p nospam-530798 -n=1          | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-530798   |                   |         |         |                     |                     |
	|         | --driver=docker                |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC |                     |
	|         | /tmp/nospam-530798 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC |                     |
	|         | /tmp/nospam-530798 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC |                     |
	|         | /tmp/nospam-530798 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 pause       |                   |         |         |                     |                     |
	| pause   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 pause       |                   |         |         |                     |                     |
	| pause   | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 pause       |                   |         |         |                     |                     |
	| unpause | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop        |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop        |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir        | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-530798               | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	| start   | -p functional-546931           | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-546931           | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:34 UTC |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:33:40
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:33:40.770875   38254 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:33:40.771214   38254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:33:40.771225   38254 out.go:358] Setting ErrFile to fd 2...
	I0916 10:33:40.771229   38254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:33:40.771468   38254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:33:40.772058   38254 out.go:352] Setting JSON to false
	I0916 10:33:40.772994   38254 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":961,"bootTime":1726481860,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:33:40.773092   38254 start.go:139] virtualization: kvm guest
	I0916 10:33:40.775582   38254 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:33:40.776810   38254 notify.go:220] Checking for updates...
	I0916 10:33:40.776824   38254 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:33:40.778328   38254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:33:40.779827   38254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:33:40.781225   38254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:33:40.782854   38254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:33:40.784657   38254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:33:40.787127   38254 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:33:40.787260   38254 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:33:40.811874   38254 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:33:40.812025   38254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:33:40.868273   38254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 10:33:40.858814631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:33:40.868372   38254 docker.go:318] overlay module found
	I0916 10:33:40.870598   38254 out.go:177] * Using the docker driver based on existing profile
	I0916 10:33:40.872000   38254 start.go:297] selected driver: docker
	I0916 10:33:40.872020   38254 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:33:40.872110   38254 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:33:40.872236   38254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:33:40.926447   38254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 10:33:40.915860884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:33:40.927025   38254 cni.go:84] Creating CNI manager for ""
	I0916 10:33:40.927063   38254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:33:40.927104   38254 start.go:340] cluster config:
	{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:33:40.929251   38254 out.go:177] * Starting "functional-546931" primary control-plane node in "functional-546931" cluster
	I0916 10:33:40.930726   38254 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:33:40.932156   38254 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:33:40.933438   38254 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:33:40.933468   38254 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:33:40.933483   38254 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:33:40.933499   38254 cache.go:56] Caching tarball of preloaded images
	I0916 10:33:40.933594   38254 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:33:40.933606   38254 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:33:40.933720   38254 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/config.json ...
	W0916 10:33:40.954493   38254 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:33:40.954521   38254 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:33:40.954610   38254 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:33:40.954627   38254 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:33:40.954631   38254 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:33:40.954639   38254 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:33:40.954646   38254 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:33:40.956035   38254 image.go:273] response: 
	I0916 10:33:41.014396   38254 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:33:41.014445   38254 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:33:41.014478   38254 start.go:360] acquireMachinesLock for functional-546931: {Name:mk0ba09111db367b90aa515f201f345e63335cec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:33:41.014562   38254 start.go:364] duration metric: took 44.876µs to acquireMachinesLock for "functional-546931"
	I0916 10:33:41.014581   38254 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:33:41.014588   38254 fix.go:54] fixHost starting: 
	I0916 10:33:41.014788   38254 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:33:41.032464   38254 fix.go:112] recreateIfNeeded on functional-546931: state=Running err=<nil>
	W0916 10:33:41.032501   38254 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:33:41.034913   38254 out.go:177] * Updating the running docker "functional-546931" container ...
	I0916 10:33:41.036263   38254 machine.go:93] provisionDockerMachine start ...
	I0916 10:33:41.036349   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.055346   38254 main.go:141] libmachine: Using SSH client type: native
	I0916 10:33:41.055594   38254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:33:41.055611   38254 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:33:41.192774   38254 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546931
	
	I0916 10:33:41.192811   38254 ubuntu.go:169] provisioning hostname "functional-546931"
	I0916 10:33:41.192875   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.211900   38254 main.go:141] libmachine: Using SSH client type: native
	I0916 10:33:41.212128   38254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:33:41.212148   38254 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546931 && echo "functional-546931" | sudo tee /etc/hostname
	I0916 10:33:41.360228   38254 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546931
	
	I0916 10:33:41.360314   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.377015   38254 main.go:141] libmachine: Using SSH client type: native
	I0916 10:33:41.377240   38254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:33:41.377259   38254 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546931/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:33:41.509419   38254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:33:41.509453   38254 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:33:41.509476   38254 ubuntu.go:177] setting up certificates
	I0916 10:33:41.509484   38254 provision.go:84] configureAuth start
	I0916 10:33:41.509533   38254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-546931
	I0916 10:33:41.527045   38254 provision.go:143] copyHostCerts
	I0916 10:33:41.527081   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:33:41.527116   38254 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:33:41.527126   38254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:33:41.527187   38254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:33:41.527269   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:33:41.527294   38254 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:33:41.527304   38254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:33:41.527343   38254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:33:41.527399   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:33:41.527417   38254 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:33:41.527424   38254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:33:41.527446   38254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:33:41.527495   38254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.functional-546931 san=[127.0.0.1 192.168.49.2 functional-546931 localhost minikube]
	I0916 10:33:41.723877   38254 provision.go:177] copyRemoteCerts
	I0916 10:33:41.723943   38254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:33:41.723990   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.742923   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:41.842009   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:33:41.842070   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:33:41.863475   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:33:41.863546   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:33:41.885728   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:33:41.885808   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:33:41.908294   38254 provision.go:87] duration metric: took 398.792469ms to configureAuth
	I0916 10:33:41.908321   38254 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:33:41.908487   38254 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:33:41.908581   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:41.926776   38254 main.go:141] libmachine: Using SSH client type: native
	I0916 10:33:41.926981   38254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:33:41.926998   38254 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:33:47.267116   38254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:33:47.267143   38254 machine.go:96] duration metric: took 6.230864456s to provisionDockerMachine
	I0916 10:33:47.267157   38254 start.go:293] postStartSetup for "functional-546931" (driver="docker")
	I0916 10:33:47.267171   38254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:33:47.267223   38254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:33:47.267257   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:47.284010   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:47.377932   38254 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:33:47.380909   38254 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:33:47.380929   38254 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:33:47.380936   38254 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:33:47.380944   38254 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:33:47.380950   38254 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:33:47.380956   38254 command_runner.go:130] > ID=ubuntu
	I0916 10:33:47.380961   38254 command_runner.go:130] > ID_LIKE=debian
	I0916 10:33:47.380968   38254 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:33:47.380977   38254 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:33:47.380987   38254 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:33:47.381000   38254 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:33:47.381006   38254 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:33:47.381061   38254 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:33:47.381093   38254 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:33:47.381106   38254 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:33:47.381118   38254 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:33:47.381131   38254 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:33:47.381194   38254 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:33:47.381292   38254 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:33:47.381305   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:33:47.381411   38254 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts -> hosts in /etc/test/nested/copy/11208
	I0916 10:33:47.381419   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts -> /etc/test/nested/copy/11208/hosts
	I0916 10:33:47.381467   38254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11208
	I0916 10:33:47.389827   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:33:47.411941   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts --> /etc/test/nested/copy/11208/hosts (40 bytes)
	I0916 10:33:47.433973   38254 start.go:296] duration metric: took 166.799134ms for postStartSetup
	I0916 10:33:47.434042   38254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:33:47.434075   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:47.451092   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:47.542209   38254 command_runner.go:130] > 30%
	I0916 10:33:47.542290   38254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:33:47.546531   38254 command_runner.go:130] > 205G
	I0916 10:33:47.546731   38254 fix.go:56] duration metric: took 6.5321272s for fixHost
	I0916 10:33:47.546753   38254 start.go:83] releasing machines lock for "functional-546931", held for 6.53217868s
	I0916 10:33:47.546819   38254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-546931
	I0916 10:33:47.563606   38254 ssh_runner.go:195] Run: cat /version.json
	I0916 10:33:47.563637   38254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:33:47.563674   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:47.563716   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:47.581622   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:47.582240   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:47.676950   38254 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:33:47.677144   38254 ssh_runner.go:195] Run: systemctl --version
	I0916 10:33:47.751671   38254 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:33:47.751721   38254 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:33:47.751745   38254 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:33:47.751805   38254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:33:47.889831   38254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:33:47.894036   38254 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf.mk_disabled
	I0916 10:33:47.894064   38254 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:33:47.894074   38254 command_runner.go:130] > Device: 37h/55d	Inode: 535096      Links: 1
	I0916 10:33:47.894083   38254 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:33:47.894089   38254 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:33:47.894094   38254 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:33:47.894099   38254 command_runner.go:130] > Change: 2024-09-16 10:33:10.369617623 +0000
	I0916 10:33:47.894104   38254 command_runner.go:130] >  Birth: 2024-09-16 10:33:10.369617623 +0000
	I0916 10:33:47.894157   38254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:33:47.902355   38254 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:33:47.902411   38254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:33:47.910389   38254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:33:47.910416   38254 start.go:495] detecting cgroup driver to use...
	I0916 10:33:47.910444   38254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:33:47.910486   38254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:33:47.921885   38254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:33:47.932184   38254 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:33:47.932238   38254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:33:47.944255   38254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:33:47.954927   38254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:33:48.063649   38254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:33:48.173240   38254 docker.go:233] disabling docker service ...
	I0916 10:33:48.173304   38254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:33:48.185048   38254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:33:48.195758   38254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:33:48.304682   38254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:33:48.409454   38254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:33:48.420073   38254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:33:48.434731   38254 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:33:48.434777   38254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:33:48.434822   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.443602   38254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:33:48.443670   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.452457   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.461402   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.470379   38254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:33:48.479040   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.487789   38254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.496160   38254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:33:48.504870   38254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:33:48.511765   38254 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:33:48.512422   38254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:33:48.520403   38254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:33:48.628460   38254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:33:48.760479   38254 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:33:48.760539   38254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:33:48.764057   38254 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:33:48.764081   38254 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:33:48.764090   38254 command_runner.go:130] > Device: 40h/64d	Inode: 556         Links: 1
	I0916 10:33:48.764100   38254 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:33:48.764107   38254 command_runner.go:130] > Access: 2024-09-16 10:33:48.724442048 +0000
	I0916 10:33:48.764121   38254 command_runner.go:130] > Modify: 2024-09-16 10:33:48.724442048 +0000
	I0916 10:33:48.764134   38254 command_runner.go:130] > Change: 2024-09-16 10:33:48.724442048 +0000
	I0916 10:33:48.764144   38254 command_runner.go:130] >  Birth: -
	I0916 10:33:48.764168   38254 start.go:563] Will wait 60s for crictl version
	I0916 10:33:48.764206   38254 ssh_runner.go:195] Run: which crictl
	I0916 10:33:48.767272   38254 command_runner.go:130] > /usr/bin/crictl
	I0916 10:33:48.767358   38254 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:33:48.798589   38254 command_runner.go:130] > Version:  0.1.0
	I0916 10:33:48.798608   38254 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:33:48.798619   38254 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:33:48.798625   38254 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:33:48.800498   38254 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:33:48.800571   38254 ssh_runner.go:195] Run: crio --version
	I0916 10:33:48.833121   38254 command_runner.go:130] > crio version 1.24.6
	I0916 10:33:48.833142   38254 command_runner.go:130] > Version:          1.24.6
	I0916 10:33:48.833150   38254 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:33:48.833154   38254 command_runner.go:130] > GitTreeState:     clean
	I0916 10:33:48.833160   38254 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:33:48.833165   38254 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:33:48.833170   38254 command_runner.go:130] > Compiler:         gc
	I0916 10:33:48.833174   38254 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:33:48.833179   38254 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:33:48.833186   38254 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:33:48.833190   38254 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:33:48.833199   38254 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:33:48.834514   38254 ssh_runner.go:195] Run: crio --version
	I0916 10:33:48.867161   38254 command_runner.go:130] > crio version 1.24.6
	I0916 10:33:48.867194   38254 command_runner.go:130] > Version:          1.24.6
	I0916 10:33:48.867202   38254 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:33:48.867206   38254 command_runner.go:130] > GitTreeState:     clean
	I0916 10:33:48.867212   38254 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:33:48.867216   38254 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:33:48.867220   38254 command_runner.go:130] > Compiler:         gc
	I0916 10:33:48.867225   38254 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:33:48.867230   38254 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:33:48.867237   38254 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:33:48.867244   38254 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:33:48.867249   38254 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:33:48.870738   38254 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:33:48.872074   38254 cli_runner.go:164] Run: docker network inspect functional-546931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:33:48.888862   38254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:33:48.892499   38254 command_runner.go:130] > 192.168.49.1	host.minikube.internal
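	The network inspect above feeds minikube's template for subnet/gateway/MTU discovery, and the grep confirms the gateway (192.168.49.1) is already mapped to host.minikube.internal in the node's /etc/hosts. A simpler probe of the same IPAM data, using docker's own template syntax (a sketch, assuming the docker CLI can reach the same daemon):

	  docker network inspect functional-546931 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'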
	I0916 10:33:48.892597   38254 kubeadm.go:883] updating cluster {Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:33:48.892702   38254 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:33:48.892742   38254 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:33:48.927357   38254 command_runner.go:130] > {
	I0916 10:33:48.927387   38254 command_runner.go:130] >   "images": [
	I0916 10:33:48.927392   38254 command_runner.go:130] >     {
	I0916 10:33:48.927400   38254 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:33:48.927405   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927411   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:33:48.927415   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927419   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927428   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:33:48.927435   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:33:48.927439   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927443   38254 command_runner.go:130] >       "size": "87190579",
	I0916 10:33:48.927447   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.927451   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927460   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927464   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927468   38254 command_runner.go:130] >     },
	I0916 10:33:48.927471   38254 command_runner.go:130] >     {
	I0916 10:33:48.927477   38254 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:33:48.927484   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927490   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:33:48.927494   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927497   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927505   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:33:48.927520   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:33:48.927523   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927530   38254 command_runner.go:130] >       "size": "31470524",
	I0916 10:33:48.927536   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.927541   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927547   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927551   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927555   38254 command_runner.go:130] >     },
	I0916 10:33:48.927560   38254 command_runner.go:130] >     {
	I0916 10:33:48.927568   38254 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:33:48.927572   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927580   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:33:48.927583   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927587   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927595   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:33:48.927604   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:33:48.927608   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927612   38254 command_runner.go:130] >       "size": "63273227",
	I0916 10:33:48.927618   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.927622   38254 command_runner.go:130] >       "username": "nonroot",
	I0916 10:33:48.927628   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927632   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927635   38254 command_runner.go:130] >     },
	I0916 10:33:48.927639   38254 command_runner.go:130] >     {
	I0916 10:33:48.927644   38254 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:33:48.927649   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927654   38254 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:33:48.927664   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927669   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927675   38254 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:33:48.927686   38254 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:33:48.927692   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927696   38254 command_runner.go:130] >       "size": "149009664",
	I0916 10:33:48.927702   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.927706   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.927711   38254 command_runner.go:130] >       },
	I0916 10:33:48.927715   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927719   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927723   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927727   38254 command_runner.go:130] >     },
	I0916 10:33:48.927730   38254 command_runner.go:130] >     {
	I0916 10:33:48.927737   38254 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:33:48.927743   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927748   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:33:48.927752   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927756   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927763   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:33:48.927774   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:33:48.927781   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927785   38254 command_runner.go:130] >       "size": "95237600",
	I0916 10:33:48.927791   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.927794   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.927798   38254 command_runner.go:130] >       },
	I0916 10:33:48.927802   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927808   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927812   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927815   38254 command_runner.go:130] >     },
	I0916 10:33:48.927818   38254 command_runner.go:130] >     {
	I0916 10:33:48.927824   38254 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:33:48.927830   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927835   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:33:48.927839   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927843   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927851   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:33:48.927861   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:33:48.927866   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927870   38254 command_runner.go:130] >       "size": "89437508",
	I0916 10:33:48.927875   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.927880   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.927884   38254 command_runner.go:130] >       },
	I0916 10:33:48.927887   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927891   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927897   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927901   38254 command_runner.go:130] >     },
	I0916 10:33:48.927905   38254 command_runner.go:130] >     {
	I0916 10:33:48.927913   38254 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:33:48.927920   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.927925   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:33:48.927928   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927933   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.927940   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:33:48.927949   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:33:48.927953   38254 command_runner.go:130] >       ],
	I0916 10:33:48.927957   38254 command_runner.go:130] >       "size": "92733849",
	I0916 10:33:48.927963   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.927967   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.927973   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.927977   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.927980   38254 command_runner.go:130] >     },
	I0916 10:33:48.927987   38254 command_runner.go:130] >     {
	I0916 10:33:48.927993   38254 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:33:48.927998   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.928003   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:33:48.928008   38254 command_runner.go:130] >       ],
	I0916 10:33:48.928011   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.928028   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:33:48.928038   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:33:48.928041   38254 command_runner.go:130] >       ],
	I0916 10:33:48.928045   38254 command_runner.go:130] >       "size": "68420934",
	I0916 10:33:48.928049   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.928053   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.928056   38254 command_runner.go:130] >       },
	I0916 10:33:48.928061   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.928064   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.928070   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.928073   38254 command_runner.go:130] >     },
	I0916 10:33:48.928079   38254 command_runner.go:130] >     {
	I0916 10:33:48.928087   38254 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:33:48.928093   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.928098   38254 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:33:48.928101   38254 command_runner.go:130] >       ],
	I0916 10:33:48.928105   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.928112   38254 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:33:48.928124   38254 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:33:48.928128   38254 command_runner.go:130] >       ],
	I0916 10:33:48.928152   38254 command_runner.go:130] >       "size": "742080",
	I0916 10:33:48.928163   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.928167   38254 command_runner.go:130] >         "value": "65535"
	I0916 10:33:48.928170   38254 command_runner.go:130] >       },
	I0916 10:33:48.928174   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.928178   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.928182   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.928185   38254 command_runner.go:130] >     }
	I0916 10:33:48.928188   38254 command_runner.go:130] >   ]
	I0916 10:33:48.928191   38254 command_runner.go:130] > }
	I0916 10:33:48.929512   38254 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:33:48.929532   38254 crio.go:433] Images already preloaded, skipping extraction
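	The image list above is what drives the preload decision logged by crio.go: every image required for v1.31.1 on cri-o (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, kindnet, storage-provisioner) is already in CRI-O's store, so extraction is skipped. To flatten the same JSON to just the tags, something like this works (a sketch assuming jq is installed alongside crictl):

	  sudo crictl images --output json | jq -r '.images[].repoTags[]'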
	I0916 10:33:48.929573   38254 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:33:48.959189   38254 command_runner.go:130] > {
	I0916 10:33:48.959209   38254 command_runner.go:130] >   "images": [
	I0916 10:33:48.959213   38254 command_runner.go:130] >     {
	I0916 10:33:48.959222   38254 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:33:48.959227   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959233   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:33:48.959240   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959243   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959252   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:33:48.959259   38254 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:33:48.959265   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959270   38254 command_runner.go:130] >       "size": "87190579",
	I0916 10:33:48.959277   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.959281   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959286   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959290   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959296   38254 command_runner.go:130] >     },
	I0916 10:33:48.959299   38254 command_runner.go:130] >     {
	I0916 10:33:48.959305   38254 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:33:48.959312   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959317   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:33:48.959324   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959328   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959335   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:33:48.959344   38254 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:33:48.959348   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959357   38254 command_runner.go:130] >       "size": "31470524",
	I0916 10:33:48.959363   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.959382   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959391   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959395   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959399   38254 command_runner.go:130] >     },
	I0916 10:33:48.959403   38254 command_runner.go:130] >     {
	I0916 10:33:48.959410   38254 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:33:48.959414   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959419   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:33:48.959425   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959428   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959435   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:33:48.959455   38254 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:33:48.959461   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959465   38254 command_runner.go:130] >       "size": "63273227",
	I0916 10:33:48.959469   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.959474   38254 command_runner.go:130] >       "username": "nonroot",
	I0916 10:33:48.959478   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959482   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959485   38254 command_runner.go:130] >     },
	I0916 10:33:48.959489   38254 command_runner.go:130] >     {
	I0916 10:33:48.959495   38254 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:33:48.959500   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959506   38254 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:33:48.959511   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959515   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959521   38254 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:33:48.959534   38254 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:33:48.959538   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959542   38254 command_runner.go:130] >       "size": "149009664",
	I0916 10:33:48.959546   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.959550   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.959553   38254 command_runner.go:130] >       },
	I0916 10:33:48.959559   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959564   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959577   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959583   38254 command_runner.go:130] >     },
	I0916 10:33:48.959586   38254 command_runner.go:130] >     {
	I0916 10:33:48.959592   38254 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:33:48.959598   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959603   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:33:48.959609   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959614   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959623   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:33:48.959631   38254 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:33:48.959636   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959641   38254 command_runner.go:130] >       "size": "95237600",
	I0916 10:33:48.959645   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.959649   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.959652   38254 command_runner.go:130] >       },
	I0916 10:33:48.959656   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959660   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959663   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959667   38254 command_runner.go:130] >     },
	I0916 10:33:48.959672   38254 command_runner.go:130] >     {
	I0916 10:33:48.959678   38254 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:33:48.959682   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959687   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:33:48.959690   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959694   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959701   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:33:48.959708   38254 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:33:48.959711   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959722   38254 command_runner.go:130] >       "size": "89437508",
	I0916 10:33:48.959725   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.959729   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.959732   38254 command_runner.go:130] >       },
	I0916 10:33:48.959737   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959740   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959744   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959747   38254 command_runner.go:130] >     },
	I0916 10:33:48.959750   38254 command_runner.go:130] >     {
	I0916 10:33:48.959756   38254 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:33:48.959761   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959766   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:33:48.959772   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959776   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959786   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:33:48.959794   38254 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:33:48.959799   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959804   38254 command_runner.go:130] >       "size": "92733849",
	I0916 10:33:48.959810   38254 command_runner.go:130] >       "uid": null,
	I0916 10:33:48.959814   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959818   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959822   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959826   38254 command_runner.go:130] >     },
	I0916 10:33:48.959829   38254 command_runner.go:130] >     {
	I0916 10:33:48.959835   38254 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:33:48.959841   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959846   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:33:48.959850   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959854   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959870   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:33:48.959880   38254 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:33:48.959883   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959887   38254 command_runner.go:130] >       "size": "68420934",
	I0916 10:33:48.959891   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.959898   38254 command_runner.go:130] >         "value": "0"
	I0916 10:33:48.959901   38254 command_runner.go:130] >       },
	I0916 10:33:48.959922   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.959929   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.959933   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.959937   38254 command_runner.go:130] >     },
	I0916 10:33:48.959941   38254 command_runner.go:130] >     {
	I0916 10:33:48.959947   38254 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:33:48.959953   38254 command_runner.go:130] >       "repoTags": [
	I0916 10:33:48.959958   38254 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:33:48.959964   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959969   38254 command_runner.go:130] >       "repoDigests": [
	I0916 10:33:48.959976   38254 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:33:48.959985   38254 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:33:48.959988   38254 command_runner.go:130] >       ],
	I0916 10:33:48.959992   38254 command_runner.go:130] >       "size": "742080",
	I0916 10:33:48.959996   38254 command_runner.go:130] >       "uid": {
	I0916 10:33:48.960000   38254 command_runner.go:130] >         "value": "65535"
	I0916 10:33:48.960003   38254 command_runner.go:130] >       },
	I0916 10:33:48.960007   38254 command_runner.go:130] >       "username": "",
	I0916 10:33:48.960014   38254 command_runner.go:130] >       "spec": null,
	I0916 10:33:48.960019   38254 command_runner.go:130] >       "pinned": false
	I0916 10:33:48.960022   38254 command_runner.go:130] >     }
	I0916 10:33:48.960025   38254 command_runner.go:130] >   ]
	I0916 10:33:48.960029   38254 command_runner.go:130] > }
	I0916 10:33:48.961474   38254 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:33:48.961496   38254 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:33:48.961506   38254 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 crio true true} ...
	I0916 10:33:48.961618   38254 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-546931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
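	The snippet above is the kubelet systemd override that minikube renders from the cluster config: the empty ExecStart= clears any ExecStart inherited from a packaged unit before the versioned binary under /var/lib/minikube/binaries is launched with the node-specific flags (hostname override, node IP, kubeconfig paths). To see the unit as systemd actually resolved it on the node, something like the following should work (assuming kubelet runs under systemd, as it does in the kicbase image):

	  minikube -p functional-546931 ssh -- sudo systemctl cat kubelet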
	I0916 10:33:48.961707   38254 ssh_runner.go:195] Run: crio config
	I0916 10:33:48.997137   38254 command_runner.go:130] ! time="2024-09-16 10:33:48.996693989Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0916 10:33:48.997172   38254 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 10:33:49.002096   38254 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 10:33:49.002120   38254 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 10:33:49.002130   38254 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 10:33:49.002135   38254 command_runner.go:130] > #
	I0916 10:33:49.002146   38254 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 10:33:49.002155   38254 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 10:33:49.002163   38254 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 10:33:49.002175   38254 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 10:33:49.002182   38254 command_runner.go:130] > # reload'.
	I0916 10:33:49.002196   38254 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 10:33:49.002210   38254 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 10:33:49.002221   38254 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 10:33:49.002234   38254 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 10:33:49.002243   38254 command_runner.go:130] > [crio]
	I0916 10:33:49.002255   38254 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 10:33:49.002266   38254 command_runner.go:130] > # containers images, in this directory.
	I0916 10:33:49.002277   38254 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0916 10:33:49.002286   38254 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 10:33:49.002293   38254 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0916 10:33:49.002302   38254 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 10:33:49.002317   38254 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 10:33:49.002324   38254 command_runner.go:130] > # storage_driver = "vfs"
	I0916 10:33:49.002337   38254 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 10:33:49.002347   38254 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 10:33:49.002356   38254 command_runner.go:130] > # storage_option = [
	I0916 10:33:49.002363   38254 command_runner.go:130] > # ]
	I0916 10:33:49.002376   38254 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 10:33:49.002390   38254 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 10:33:49.002395   38254 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 10:33:49.002403   38254 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 10:33:49.002410   38254 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 10:33:49.002416   38254 command_runner.go:130] > # always happen on a node reboot
	I0916 10:33:49.002421   38254 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 10:33:49.002427   38254 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 10:33:49.002440   38254 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 10:33:49.002451   38254 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 10:33:49.002459   38254 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0916 10:33:49.002474   38254 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 10:33:49.002489   38254 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 10:33:49.002497   38254 command_runner.go:130] > # internal_wipe = true
	I0916 10:33:49.002510   38254 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 10:33:49.002520   38254 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 10:33:49.002528   38254 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 10:33:49.002535   38254 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 10:33:49.002541   38254 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 10:33:49.002548   38254 command_runner.go:130] > [crio.api]
	I0916 10:33:49.002554   38254 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 10:33:49.002559   38254 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 10:33:49.002567   38254 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 10:33:49.002571   38254 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 10:33:49.002578   38254 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 10:33:49.002585   38254 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 10:33:49.002589   38254 command_runner.go:130] > # stream_port = "0"
	I0916 10:33:49.002597   38254 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 10:33:49.002601   38254 command_runner.go:130] > # stream_enable_tls = false
	I0916 10:33:49.002609   38254 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 10:33:49.002613   38254 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 10:33:49.002619   38254 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 10:33:49.002627   38254 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 10:33:49.002636   38254 command_runner.go:130] > # minutes.
	I0916 10:33:49.002640   38254 command_runner.go:130] > # stream_tls_cert = ""
	I0916 10:33:49.002648   38254 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 10:33:49.002654   38254 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 10:33:49.002658   38254 command_runner.go:130] > # stream_tls_key = ""
	I0916 10:33:49.002664   38254 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 10:33:49.002670   38254 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 10:33:49.002676   38254 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 10:33:49.002680   38254 command_runner.go:130] > # stream_tls_ca = ""
	I0916 10:33:49.002688   38254 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:33:49.002694   38254 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0916 10:33:49.002701   38254 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:33:49.002708   38254 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0916 10:33:49.002724   38254 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 10:33:49.002732   38254 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 10:33:49.002736   38254 command_runner.go:130] > [crio.runtime]
	I0916 10:33:49.002742   38254 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 10:33:49.002749   38254 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 10:33:49.002753   38254 command_runner.go:130] > # "nofile=1024:2048"
	I0916 10:33:49.002760   38254 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 10:33:49.002766   38254 command_runner.go:130] > # default_ulimits = [
	I0916 10:33:49.002769   38254 command_runner.go:130] > # ]
	I0916 10:33:49.002775   38254 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 10:33:49.002781   38254 command_runner.go:130] > # no_pivot = false
	I0916 10:33:49.002787   38254 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 10:33:49.002798   38254 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 10:33:49.002803   38254 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 10:33:49.002811   38254 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 10:33:49.002816   38254 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 10:33:49.002825   38254 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:33:49.002829   38254 command_runner.go:130] > # conmon = ""
	I0916 10:33:49.002836   38254 command_runner.go:130] > # Cgroup setting for conmon
	I0916 10:33:49.002842   38254 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 10:33:49.002849   38254 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 10:33:49.002855   38254 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 10:33:49.002860   38254 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 10:33:49.002869   38254 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:33:49.002873   38254 command_runner.go:130] > # conmon_env = [
	I0916 10:33:49.002885   38254 command_runner.go:130] > # ]
	I0916 10:33:49.002893   38254 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 10:33:49.002898   38254 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 10:33:49.002905   38254 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 10:33:49.002908   38254 command_runner.go:130] > # default_env = [
	I0916 10:33:49.002912   38254 command_runner.go:130] > # ]
	I0916 10:33:49.002917   38254 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 10:33:49.002923   38254 command_runner.go:130] > # selinux = false
	I0916 10:33:49.002930   38254 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 10:33:49.002939   38254 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 10:33:49.002947   38254 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 10:33:49.002951   38254 command_runner.go:130] > # seccomp_profile = ""
	I0916 10:33:49.002956   38254 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 10:33:49.002964   38254 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 10:33:49.002970   38254 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 10:33:49.002977   38254 command_runner.go:130] > # which might increase security.
	I0916 10:33:49.002981   38254 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0916 10:33:49.002987   38254 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 10:33:49.002996   38254 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 10:33:49.003002   38254 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 10:33:49.003010   38254 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 10:33:49.003016   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.003023   38254 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 10:33:49.003030   38254 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 10:33:49.003037   38254 command_runner.go:130] > # the cgroup blockio controller.
	I0916 10:33:49.003041   38254 command_runner.go:130] > # blockio_config_file = ""
	I0916 10:33:49.003047   38254 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 10:33:49.003053   38254 command_runner.go:130] > # irqbalance daemon.
	I0916 10:33:49.003058   38254 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 10:33:49.003066   38254 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 10:33:49.003073   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.003077   38254 command_runner.go:130] > # rdt_config_file = ""
	I0916 10:33:49.003083   38254 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 10:33:49.003088   38254 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 10:33:49.003094   38254 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 10:33:49.003100   38254 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 10:33:49.003106   38254 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 10:33:49.003114   38254 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 10:33:49.003118   38254 command_runner.go:130] > # will be added.
	I0916 10:33:49.003124   38254 command_runner.go:130] > # default_capabilities = [
	I0916 10:33:49.003128   38254 command_runner.go:130] > # 	"CHOWN",
	I0916 10:33:49.003135   38254 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 10:33:49.003139   38254 command_runner.go:130] > # 	"FSETID",
	I0916 10:33:49.003142   38254 command_runner.go:130] > # 	"FOWNER",
	I0916 10:33:49.003146   38254 command_runner.go:130] > # 	"SETGID",
	I0916 10:33:49.003149   38254 command_runner.go:130] > # 	"SETUID",
	I0916 10:33:49.003153   38254 command_runner.go:130] > # 	"SETPCAP",
	I0916 10:33:49.003157   38254 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 10:33:49.003163   38254 command_runner.go:130] > # 	"KILL",
	I0916 10:33:49.003166   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003173   38254 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 10:33:49.003182   38254 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 10:33:49.003186   38254 command_runner.go:130] > # add_inheritable_capabilities = true
	I0916 10:33:49.003192   38254 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 10:33:49.003199   38254 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:33:49.003203   38254 command_runner.go:130] > default_sysctls = [
	I0916 10:33:49.003208   38254 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 10:33:49.003212   38254 command_runner.go:130] > ]
	I0916 10:33:49.003217   38254 command_runner.go:130] > # List of devices on the host that a
	I0916 10:33:49.003225   38254 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 10:33:49.003229   38254 command_runner.go:130] > # allowed_devices = [
	I0916 10:33:49.003236   38254 command_runner.go:130] > # 	"/dev/fuse",
	I0916 10:33:49.003239   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003244   38254 command_runner.go:130] > # List of additional devices, specified as
	I0916 10:33:49.003263   38254 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 10:33:49.003271   38254 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 10:33:49.003277   38254 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:33:49.003283   38254 command_runner.go:130] > # additional_devices = [
	I0916 10:33:49.003286   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003291   38254 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 10:33:49.003297   38254 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 10:33:49.003301   38254 command_runner.go:130] > # 	"/etc/cdi",
	I0916 10:33:49.003308   38254 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 10:33:49.003311   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003317   38254 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 10:33:49.003326   38254 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 10:33:49.003330   38254 command_runner.go:130] > # Defaults to false.
	I0916 10:33:49.003335   38254 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 10:33:49.003341   38254 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 10:33:49.003349   38254 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 10:33:49.003353   38254 command_runner.go:130] > # hooks_dir = [
	I0916 10:33:49.003359   38254 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 10:33:49.003362   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003368   38254 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 10:33:49.003376   38254 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 10:33:49.003382   38254 command_runner.go:130] > # its default mounts from the following two files:
	I0916 10:33:49.003387   38254 command_runner.go:130] > #
	I0916 10:33:49.003393   38254 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 10:33:49.003401   38254 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 10:33:49.003407   38254 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 10:33:49.003410   38254 command_runner.go:130] > #
	I0916 10:33:49.003416   38254 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 10:33:49.003424   38254 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 10:33:49.003430   38254 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 10:33:49.003437   38254 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 10:33:49.003441   38254 command_runner.go:130] > #
	I0916 10:33:49.003447   38254 command_runner.go:130] > # default_mounts_file = ""
	I0916 10:33:49.003453   38254 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 10:33:49.003461   38254 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 10:33:49.003465   38254 command_runner.go:130] > # pids_limit = 0
	I0916 10:33:49.003471   38254 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 10:33:49.003479   38254 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 10:33:49.003485   38254 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 10:33:49.003495   38254 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 10:33:49.003501   38254 command_runner.go:130] > # log_size_max = -1
	I0916 10:33:49.003508   38254 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 10:33:49.003514   38254 command_runner.go:130] > # log_to_journald = false
	I0916 10:33:49.003520   38254 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 10:33:49.003528   38254 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 10:33:49.003533   38254 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 10:33:49.003540   38254 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 10:33:49.003545   38254 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 10:33:49.003551   38254 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 10:33:49.003557   38254 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 10:33:49.003563   38254 command_runner.go:130] > # read_only = false
	I0916 10:33:49.003569   38254 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 10:33:49.003577   38254 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 10:33:49.003581   38254 command_runner.go:130] > # live configuration reload.
	I0916 10:33:49.003587   38254 command_runner.go:130] > # log_level = "info"
	I0916 10:33:49.003593   38254 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 10:33:49.003600   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.003604   38254 command_runner.go:130] > # log_filter = ""
	I0916 10:33:49.003610   38254 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 10:33:49.003616   38254 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 10:33:49.003619   38254 command_runner.go:130] > # separated by comma.
	I0916 10:33:49.003624   38254 command_runner.go:130] > # uid_mappings = ""
	I0916 10:33:49.003630   38254 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 10:33:49.003643   38254 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 10:33:49.003650   38254 command_runner.go:130] > # separated by comma.
	I0916 10:33:49.003655   38254 command_runner.go:130] > # gid_mappings = ""
	I0916 10:33:49.003663   38254 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 10:33:49.003669   38254 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:33:49.003674   38254 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:33:49.003681   38254 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 10:33:49.003686   38254 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 10:33:49.003695   38254 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:33:49.003701   38254 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:33:49.003707   38254 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 10:33:49.003713   38254 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 10:33:49.003719   38254 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 10:33:49.003725   38254 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 10:33:49.003730   38254 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 10:33:49.003737   38254 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 10:33:49.003746   38254 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 10:33:49.003751   38254 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 10:33:49.003758   38254 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 10:33:49.003762   38254 command_runner.go:130] > # drop_infra_ctr = true
	I0916 10:33:49.003770   38254 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 10:33:49.003775   38254 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 10:33:49.003786   38254 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 10:33:49.003793   38254 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 10:33:49.003799   38254 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 10:33:49.003804   38254 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 10:33:49.003810   38254 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 10:33:49.003818   38254 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 10:33:49.003824   38254 command_runner.go:130] > # pinns_path = ""
	I0916 10:33:49.003831   38254 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 10:33:49.003839   38254 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0916 10:33:49.003846   38254 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0916 10:33:49.003853   38254 command_runner.go:130] > # default_runtime = "runc"
	I0916 10:33:49.003858   38254 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 10:33:49.003867   38254 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where they are created as directories).
	I0916 10:33:49.003882   38254 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 10:33:49.003889   38254 command_runner.go:130] > # creation as a file is not desired either.
	I0916 10:33:49.003898   38254 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 10:33:49.003905   38254 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 10:33:49.003910   38254 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 10:33:49.003913   38254 command_runner.go:130] > # ]
	I0916 10:33:49.003919   38254 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 10:33:49.003928   38254 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 10:33:49.003935   38254 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0916 10:33:49.003941   38254 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0916 10:33:49.003945   38254 command_runner.go:130] > #
	I0916 10:33:49.003949   38254 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0916 10:33:49.003957   38254 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0916 10:33:49.003962   38254 command_runner.go:130] > #  runtime_type = "oci"
	I0916 10:33:49.003969   38254 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0916 10:33:49.003973   38254 command_runner.go:130] > #  privileged_without_host_devices = false
	I0916 10:33:49.003980   38254 command_runner.go:130] > #  allowed_annotations = []
	I0916 10:33:49.003983   38254 command_runner.go:130] > # Where:
	I0916 10:33:49.003988   38254 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0916 10:33:49.003997   38254 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0916 10:33:49.004003   38254 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 10:33:49.004012   38254 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 10:33:49.004016   38254 command_runner.go:130] > #   in $PATH.
	I0916 10:33:49.004022   38254 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0916 10:33:49.004029   38254 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 10:33:49.004034   38254 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0916 10:33:49.004040   38254 command_runner.go:130] > #   state.
	I0916 10:33:49.004046   38254 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 10:33:49.004054   38254 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 10:33:49.004061   38254 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 10:33:49.004068   38254 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 10:33:49.004075   38254 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 10:33:49.004084   38254 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 10:33:49.004089   38254 command_runner.go:130] > #   The currently recognized values are:
	I0916 10:33:49.004097   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 10:33:49.004105   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 10:33:49.004112   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 10:33:49.004118   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 10:33:49.004127   38254 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 10:33:49.004134   38254 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 10:33:49.004142   38254 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 10:33:49.004149   38254 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0916 10:33:49.004156   38254 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 10:33:49.004160   38254 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 10:33:49.004165   38254 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0916 10:33:49.004170   38254 command_runner.go:130] > runtime_type = "oci"
	I0916 10:33:49.004175   38254 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 10:33:49.004181   38254 command_runner.go:130] > runtime_config_path = ""
	I0916 10:33:49.004185   38254 command_runner.go:130] > monitor_path = ""
	I0916 10:33:49.004189   38254 command_runner.go:130] > monitor_cgroup = ""
	I0916 10:33:49.004195   38254 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 10:33:49.004219   38254 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0916 10:33:49.004225   38254 command_runner.go:130] > # running containers
	I0916 10:33:49.004229   38254 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0916 10:33:49.004238   38254 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0916 10:33:49.004244   38254 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0916 10:33:49.004252   38254 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0916 10:33:49.004257   38254 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0916 10:33:49.004262   38254 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0916 10:33:49.004269   38254 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0916 10:33:49.004273   38254 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0916 10:33:49.004281   38254 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0916 10:33:49.004285   38254 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0916 10:33:49.004293   38254 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 10:33:49.004298   38254 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 10:33:49.004306   38254 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 10:33:49.004313   38254 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0916 10:33:49.004322   38254 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 10:33:49.004328   38254 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 10:33:49.004339   38254 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 10:33:49.004349   38254 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 10:33:49.004354   38254 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 10:33:49.004363   38254 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 10:33:49.004367   38254 command_runner.go:130] > # Example:
	I0916 10:33:49.004376   38254 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 10:33:49.004380   38254 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 10:33:49.004387   38254 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 10:33:49.004392   38254 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 10:33:49.004398   38254 command_runner.go:130] > # cpuset = "0-1"
	I0916 10:33:49.004402   38254 command_runner.go:130] > # cpushares = 0
	I0916 10:33:49.004408   38254 command_runner.go:130] > # Where:
	I0916 10:33:49.004412   38254 command_runner.go:130] > # The workload name is workload-type.
	I0916 10:33:49.004419   38254 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 10:33:49.004426   38254 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 10:33:49.004431   38254 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 10:33:49.004441   38254 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 10:33:49.004447   38254 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 10:33:49.004450   38254 command_runner.go:130] > # 
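
Tying the workload example together: a pod opting into the commented-out "workload-type" above would carry annotations roughly like the following. This is a sketch; the container name "my-container" and the value "512" are hypothetical, and only the annotation keys come from the config above.

package main

import "fmt"

func main() {
	// Hypothetical opt-in for the commented-out "workload-type" example:
	// the activation annotation is matched by key only (value ignored),
	// and the prefixed key overrides one resource for one container.
	annotations := map[string]string{
		"io.crio/workload":                   "",
		"io.crio.workload-type/my-container": `{"cpushares": "512"}`,
	}
	fmt.Println(annotations)
}
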
	I0916 10:33:49.004456   38254 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 10:33:49.004462   38254 command_runner.go:130] > #
	I0916 10:33:49.004468   38254 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 10:33:49.004476   38254 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 10:33:49.004482   38254 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 10:33:49.004491   38254 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 10:33:49.004497   38254 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 10:33:49.004503   38254 command_runner.go:130] > [crio.image]
	I0916 10:33:49.004508   38254 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 10:33:49.004515   38254 command_runner.go:130] > # default_transport = "docker://"
	I0916 10:33:49.004521   38254 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 10:33:49.004529   38254 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:33:49.004534   38254 command_runner.go:130] > # global_auth_file = ""
	I0916 10:33:49.004541   38254 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 10:33:49.004546   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.004553   38254 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 10:33:49.004560   38254 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 10:33:49.004568   38254 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:33:49.004573   38254 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:33:49.004580   38254 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 10:33:49.004585   38254 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 10:33:49.004594   38254 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 10:33:49.004600   38254 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 10:33:49.004608   38254 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 10:33:49.004612   38254 command_runner.go:130] > # pause_command = "/pause"
	I0916 10:33:49.004618   38254 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 10:33:49.004626   38254 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 10:33:49.004632   38254 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 10:33:49.004638   38254 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 10:33:49.004645   38254 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 10:33:49.004649   38254 command_runner.go:130] > # signature_policy = ""
	I0916 10:33:49.004660   38254 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 10:33:49.004666   38254 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 10:33:49.004671   38254 command_runner.go:130] > # changing them here.
	I0916 10:33:49.004675   38254 command_runner.go:130] > # insecure_registries = [
	I0916 10:33:49.004681   38254 command_runner.go:130] > # ]
	I0916 10:33:49.004687   38254 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 10:33:49.004693   38254 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0916 10:33:49.004697   38254 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 10:33:49.004705   38254 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 10:33:49.004709   38254 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 10:33:49.004715   38254 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 10:33:49.004720   38254 command_runner.go:130] > # CNI plugins.
	I0916 10:33:49.004723   38254 command_runner.go:130] > [crio.network]
	I0916 10:33:49.004731   38254 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 10:33:49.004737   38254 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 10:33:49.004743   38254 command_runner.go:130] > # cni_default_network = ""
	I0916 10:33:49.004748   38254 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 10:33:49.004754   38254 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 10:33:49.004760   38254 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 10:33:49.004766   38254 command_runner.go:130] > # plugin_dirs = [
	I0916 10:33:49.004769   38254 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 10:33:49.004773   38254 command_runner.go:130] > # ]
	I0916 10:33:49.004778   38254 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0916 10:33:49.004784   38254 command_runner.go:130] > [crio.metrics]
	I0916 10:33:49.004789   38254 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 10:33:49.004796   38254 command_runner.go:130] > # enable_metrics = false
	I0916 10:33:49.004801   38254 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 10:33:49.004808   38254 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 10:33:49.004814   38254 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 10:33:49.004820   38254 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 10:33:49.004826   38254 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 10:33:49.004832   38254 command_runner.go:130] > # metrics_collectors = [
	I0916 10:33:49.004835   38254 command_runner.go:130] > # 	"operations",
	I0916 10:33:49.004841   38254 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 10:33:49.004848   38254 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 10:33:49.004851   38254 command_runner.go:130] > # 	"operations_errors",
	I0916 10:33:49.004856   38254 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 10:33:49.004860   38254 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 10:33:49.004864   38254 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 10:33:49.004870   38254 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 10:33:49.004874   38254 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 10:33:49.004884   38254 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 10:33:49.004890   38254 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 10:33:49.004894   38254 command_runner.go:130] > # 	"containers_oom_total",
	I0916 10:33:49.004897   38254 command_runner.go:130] > # 	"containers_oom",
	I0916 10:33:49.004902   38254 command_runner.go:130] > # 	"processes_defunct",
	I0916 10:33:49.004908   38254 command_runner.go:130] > # 	"operations_total",
	I0916 10:33:49.004912   38254 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 10:33:49.004917   38254 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 10:33:49.004921   38254 command_runner.go:130] > # 	"operations_errors_total",
	I0916 10:33:49.004927   38254 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 10:33:49.004931   38254 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 10:33:49.004937   38254 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 10:33:49.004941   38254 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 10:33:49.004945   38254 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 10:33:49.004949   38254 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 10:33:49.004952   38254 command_runner.go:130] > # ]
	I0916 10:33:49.004957   38254 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 10:33:49.004967   38254 command_runner.go:130] > # metrics_port = 9090
	I0916 10:33:49.004974   38254 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 10:33:49.004978   38254 command_runner.go:130] > # metrics_socket = ""
	I0916 10:33:49.004986   38254 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 10:33:49.004995   38254 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 10:33:49.005001   38254 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 10:33:49.005008   38254 command_runner.go:130] > # certificate on any modification event.
	I0916 10:33:49.005012   38254 command_runner.go:130] > # metrics_cert = ""
	I0916 10:33:49.005019   38254 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 10:33:49.005024   38254 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 10:33:49.005031   38254 command_runner.go:130] > # metrics_key = ""
	I0916 10:33:49.005036   38254 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 10:33:49.005042   38254 command_runner.go:130] > [crio.tracing]
	I0916 10:33:49.005048   38254 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 10:33:49.005054   38254 command_runner.go:130] > # enable_tracing = false
	I0916 10:33:49.005060   38254 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 10:33:49.005066   38254 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 10:33:49.005072   38254 command_runner.go:130] > # Number of samples to collect per million spans.
	I0916 10:33:49.005078   38254 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 10:33:49.005084   38254 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 10:33:49.005090   38254 command_runner.go:130] > [crio.stats]
	I0916 10:33:49.005095   38254 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 10:33:49.005103   38254 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 10:33:49.005107   38254 command_runner.go:130] > # stats_collection_period = 0
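
The dump above is a TOML file. To inspect a few of these fields programmatically, here is a minimal sketch using the github.com/BurntSushi/toml decoder and assuming the default /etc/crio/crio.conf path (an assumption for illustration; CRI-O ships its own config loader):

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConf pulls out just a few of the fields shown in the dump above;
// the decoder ignores keys that have no matching struct field.
type crioConf struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
			MetricsPort   int  `toml:"metrics_port"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var c crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pause_image=%s metrics_port=%d\n",
		c.Crio.Image.PauseImage, c.Crio.Metrics.MetricsPort)
}
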
	I0916 10:33:49.005165   38254 cni.go:84] Creating CNI manager for ""
	I0916 10:33:49.005174   38254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:33:49.005184   38254 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:33:49.005202   38254 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546931 NodeName:functional-546931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:33:49.005320   38254 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546931"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
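
minikube renders this multi-document kubeadm YAML from the options struct logged at 10:33:49.005202. Purely as a sketch of the technique (not minikube's actual template), the node-specific fields could be injected with the standard library's text/template:

package main

import (
	"log"
	"os"
	"text/template"
)

// Illustrative subset of the InitConfiguration document above; the real
// minikube template covers all four YAML documents in the dump.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	data := struct {
		NodeIP, NodeName string
		APIServerPort    int
	}{"192.168.49.2", "functional-546931", 8441}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}

Running this reproduces the first document of the dump above for this node; the rendered file is then copied to /var/tmp/minikube/kubeadm.yaml.new, as the scp line below shows.
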
	
	I0916 10:33:49.005406   38254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:33:49.013742   38254 command_runner.go:130] > kubeadm
	I0916 10:33:49.013765   38254 command_runner.go:130] > kubectl
	I0916 10:33:49.013771   38254 command_runner.go:130] > kubelet
	I0916 10:33:49.013796   38254 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:33:49.013847   38254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:33:49.021757   38254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0916 10:33:49.038691   38254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:33:49.055067   38254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0916 10:33:49.071178   38254 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:33:49.074413   38254 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0916 10:33:49.074489   38254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:33:49.176315   38254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:33:49.186887   38254 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931 for IP: 192.168.49.2
	I0916 10:33:49.186909   38254 certs.go:194] generating shared ca certs ...
	I0916 10:33:49.186926   38254 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:33:49.187066   38254 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:33:49.187105   38254 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:33:49.187111   38254 certs.go:256] generating profile certs ...
	I0916 10:33:49.187181   38254 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.key
	I0916 10:33:49.187236   38254 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.key.94db7109
	I0916 10:33:49.187275   38254 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.key
	I0916 10:33:49.187283   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:33:49.187294   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:33:49.187304   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:33:49.187316   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:33:49.187329   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:33:49.187342   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:33:49.187356   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:33:49.187368   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:33:49.187416   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:33:49.187443   38254 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:33:49.187452   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:33:49.187475   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:33:49.187496   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:33:49.187517   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:33:49.187556   38254 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:33:49.187579   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.187589   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.187599   38254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.188132   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:33:49.210555   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:33:49.232164   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:33:49.253429   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:33:49.274719   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:33:49.295960   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:33:49.317488   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:33:49.338688   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:33:49.360466   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:33:49.382811   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:33:49.405854   38254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:33:49.427060   38254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:33:49.443019   38254 ssh_runner.go:195] Run: openssl version
	I0916 10:33:49.447634   38254 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:33:49.447868   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:33:49.456226   38254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.459381   38254 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.459405   38254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.459438   38254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:33:49.465459   38254 command_runner.go:130] > 3ec20f2e
	I0916 10:33:49.465663   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:33:49.473825   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:33:49.482264   38254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.485248   38254 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.485278   38254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.485320   38254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:33:49.491217   38254 command_runner.go:130] > b5213941
	I0916 10:33:49.491418   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:33:49.499104   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:33:49.507482   38254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.510649   38254 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.510706   38254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.510753   38254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:33:49.516916   38254 command_runner.go:130] > 51391683
	I0916 10:33:49.517148   38254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
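
The three test-and-link sequences above implement OpenSSL's hashed certificate directory layout: each CA lands under /usr/share/ca-certificates and is symlinked as /etc/ssl/certs/<subject-hash>.0. A rough Go equivalent of one iteration, shelling out to openssl for the hash exactly as the log does (a sketch; requires root and an openssl binary on PATH):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash reproduces one `openssl x509 -hash` + `ln -fs` pair
// from the log for a single CA certificate.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // mimic ln -fs: replace any existing link, ignore absence
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/112082.pem"); err != nil {
		log.Fatal(err)
	}
}
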
	I0916 10:33:49.525079   38254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:33:49.528120   38254 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:33:49.528141   38254 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:33:49.528150   38254 command_runner.go:130] > Device: 801h/2049d	Inode: 845407      Links: 1
	I0916 10:33:49.528159   38254 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:33:49.528168   38254 command_runner.go:130] > Access: 2024-09-16 10:33:12.661786417 +0000
	I0916 10:33:49.528175   38254 command_runner.go:130] > Modify: 2024-09-16 10:33:12.661786417 +0000
	I0916 10:33:49.528185   38254 command_runner.go:130] > Change: 2024-09-16 10:33:12.661786417 +0000
	I0916 10:33:49.528197   38254 command_runner.go:130] >  Birth: 2024-09-16 10:33:12.661786417 +0000
	I0916 10:33:49.528251   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:33:49.534274   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.534327   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:33:49.540413   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.540482   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:33:49.546205   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.546462   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:33:49.552870   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.552926   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:33:49.559026   38254 command_runner.go:130] > Certificate will not expire
	I0916 10:33:49.559247   38254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:33:49.565244   38254 command_runner.go:130] > Certificate will not expire
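
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check in pure Go with crypto/x509 (a sketch, not what minikube executes on the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d — the Go analogue of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when now+d falls past NotAfter, i.e. the cert expires in the window.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
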
	I0916 10:33:49.565437   38254 kubeadm.go:392] StartCluster: {Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:33:49.565522   38254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:33:49.565578   38254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:33:49.596726   38254 command_runner.go:130] > 046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b
	I0916 10:33:49.596751   38254 command_runner.go:130] > 3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0
	I0916 10:33:49.596760   38254 command_runner.go:130] > fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d
	I0916 10:33:49.596771   38254 command_runner.go:130] > af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02
	I0916 10:33:49.596780   38254 command_runner.go:130] > 162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02
	I0916 10:33:49.596789   38254 command_runner.go:130] > f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb
	I0916 10:33:49.596798   38254 command_runner.go:130] > 75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534
	I0916 10:33:49.596812   38254 command_runner.go:130] > 9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81
	I0916 10:33:49.598752   38254 cri.go:89] found id: "046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b"
	I0916 10:33:49.598773   38254 cri.go:89] found id: "3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0"
	I0916 10:33:49.598779   38254 cri.go:89] found id: "fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d"
	I0916 10:33:49.598784   38254 cri.go:89] found id: "af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02"
	I0916 10:33:49.598787   38254 cri.go:89] found id: "162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02"
	I0916 10:33:49.598791   38254 cri.go:89] found id: "f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb"
	I0916 10:33:49.598793   38254 cri.go:89] found id: "75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534"
	I0916 10:33:49.598796   38254 cri.go:89] found id: "9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81"
	I0916 10:33:49.598803   38254 cri.go:89] found id: ""
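
The IDs above are one 64-hex container ID per line from `crictl ps -a --quiet`; minikube then cross-checks them against `runc list -f json`, whose (truncated) raw output follows. A minimal sketch of decoding that JSON with encoding/json, with field names inferred from the output below (not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// runcContainer covers the fields visible in the `runc list -f json`
// output below; extra fields in the JSON are ignored by encoding/json.
type runcContainer struct {
	ID          string            `json:"id"`
	Pid         int               `json:"pid"`
	Status      string            `json:"status"`
	Bundle      string            `json:"bundle"`
	Created     string            `json:"created"`
	Annotations map[string]string `json:"annotations"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ctrs []runcContainer
	if err := json.Unmarshal(out, &ctrs); err != nil {
		log.Fatal(err)
	}
	for _, c := range ctrs {
		fmt.Println(c.ID, c.Status, c.Annotations["io.kubernetes.container.name"])
	}
}
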
	I0916 10:33:49.598853   38254 ssh_runner.go:195] Run: sudo runc list -f json
	I0916 10:33:49.617772   38254 command_runner.go:130] > [{"ociVersion":"1.0.2-dev","id":"046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b/userdata","rootfs":"/var/lib/containers/storage/overlay/910a0c2bc01315fa3a464fded4f710b1057d34c7d7b2857e18a8de16957c048f/merged","created":"2024-09-16T10:33:38.617357367Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernete
s.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:38.592037792Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3","i
o.kubernetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-wjzzx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-wjzzx_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/910a0c2bc01315fa3a464fded4f710b1057d34c7d7b2857e18a8de16957c048f/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a8423288f91be1a84a4da521d6ae34bd864cd162a94fbed9d42a73771704123e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a8423288f91be1a84a4da521d6ae34bd
864cd162a94fbed9d42a73771704123e","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/containers/coredns/bf4a0824\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccou
nt\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~projected/kube-api-access-6nbq8\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-wjzzx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","kubernetes.io/config.seen":"2024-09-16T10:33:38.232398573Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02/userdata","rootfs":"/var/lib/containers/storage/overlay/be9a1f372203e2b026d3db2eea6468eaad749813495a4be6dfe5a66b16b6ed84/merged","created":"2024-09-16T10:33:16.913425992Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79"
,"io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.872185655Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f
3135b30aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c02f70efafdd9ad1683640c8d3761d1d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-546931_c02f70efafdd9ad1683640c8d3761d1d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/be9a1f372203e2b026d3db2eea6468eaad749813495a4be6dfe5a66b16b6ed84/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/878410a4a3694fdf2132194e1285396dab571b39a68ea3dbdc0049350911800d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"878410a4a3
694fdf2132194e1285396dab571b39a68ea3dbdc0049350911800d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/containers/kube-controller-manager/40c5a971\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propag
ation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-546931","io.kubernetes.pod.namespace":"kube-sy
stem","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.hash":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.seen":"2024-09-16T10:33:16.360793733Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0/userdata","rootfs":"/var/lib/containers/storage/overlay/169734699d8a29a2148c6c48e972446d8f5032095b5bbb73973aadc1d219e93f/merged","created":"2024-09-16T10:33:38.599775677Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.
kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:38.574503795Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7e94614-5
67e-47ba-a51a-426f09198dba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a7e94614-567e-47ba-a51a-426f09198dba/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/169734699d8a29a2148c6c48e972446d8f5032095b5bbb73973aadc1d219e93f/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TT
Y":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/containers/storage-provisioner/a6e61f0b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/volumes/kubernetes.io~projected/kube-api-access-2sn2d\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7e94614-567e-47ba-a51a
-426f09198dba","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-09-16T10:33:38.233440095Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/75
f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534/userdata","rootfs":"/var/lib/containers/storage/overlay/dac67d85252bf13f96a4320e1745721f70226b779a99caee53b0d5c2058e61f0/merged","created":"2024-09-16T10:33:16.898797659Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534","io.kubernetes.cri-o.ContainerType":"container","io.kubern
etes.cri-o.Created":"2024-09-16T10:33:16.857413084Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"adb8a765a0d6f587897c42f69e87ac66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546931_adb8a765a0d6f587897c42f69e87ac66/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dac67d85252bf13f96a4320e1745721f70226b779a99caee53b0d5c2058e61f0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-546931_kube-system_ad
b8a765a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546931_kube-system_adb8a765a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/containers/kube-scheduler/744b9614\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"
/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:16.360795477Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81/userdata","rootfs":"/var/lib/containers/storage/overlay/3f2d5b81adda588bd3e05ccee93b9df3daf72aec973afcb7e5fae676c4a7ffff/merged","created":"2024-09-16T10:33:16.900739801Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.ha
sh":"7df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.85647171Z","io.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aae
a29d1aee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eb02afa85fe4b42d87b2f90fa03a9ee4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-546931_eb02afa85fe4b42d87b2f90fa03a9ee4/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3f2d5b81adda588bd3e05ccee93b9df3daf72aec973afcb7e5fae676c4a7ffff/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a","io.kubernet
es.cri-o.SandboxName":"k8s_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/containers/kube-apiserver/66d438ec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/et
c/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.seen":"2024-09-16T10:33:16.360791837Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02","pid":0,"status
":"stopped","bundle":"/run/containers/storage/overlay-containers/af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02/userdata","rootfs":"/var/lib/containers/storage/overlay/44dc8cdc891e682f4096ed10197d68a070a7151c57c8d6675a213e2401d90332/merged","created":"2024-09-16T10:33:27.512878128Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e80daca3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e80daca3\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368
ed02","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:27.418309123Z","io.kubernetes.cri-o.Image":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri-o.ImageRef":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-6dtx8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"44bb424a-c279-467b-9256-64be125798f9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-6dtx8_44bb424a-c279-467b-9256-64be125798f9/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/44dc8cdc891e682f4096ed10197d68a070a7151c57c8d6675a213e2401d90332/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-6dtx8_kube
-system_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-6dtx8_kube-system_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/etc-hosts\",\"readonly\":false,\"propag
ation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/containers/kindnet-cni/72735cde\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/volumes/kubernetes.io~projected/kube-api-access-pvmbd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-6dtx8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"44bb424a-c279-467b-9256-64be125798f9","kubernetes.io/config.seen":"2024-09-16T10:33:27.017005789Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f2b587ead9ac67a13360a9d4e
64d8162b8e8a689647afbe35780436d360a37eb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb/userdata","rootfs":"/var/lib/containers/storage/overlay/20cb6bba16fec712839eac07b5ce765faf2741ea000908ea8ac56a835d2fff6d/merged","created":"2024-09-16T10:33:16.907949976Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f2b587ead9a
c67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.862227247Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4f74e884ad630d68b59e0dbdb6055584\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546931_4f74e884ad630d68b59e0dbdb6055584/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/20cb6bba16fec712839eac07b5ce765faf2741ea000908ea8ac56a835d2fff6d/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etc
d-functional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/containers/etcd/233a07f1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"conta
iner_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4f74e884ad630d68b59e0dbdb6055584","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.seen":"2024-09-16T10:33:16.360785708Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d/userdata","ro
otfs":"/var/lib/containers/storage/overlay/8005b4d90fbc1deaa0ddf38b3f6a0bc43e976e1a4a9f8fc787d1125d0d07fb03/merged","created":"2024-09-16T10:33:27.53460221Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:27.498124321Z","io.kubernetes.cri-o.Image":
"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-kshs9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-kshs9_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8005b4d90fbc1deaa0ddf38b3f6a0bc43e976e1a4a9f8fc787d1125d0d07fb03/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f14f9778290afbd7383f2dd12e
e1f50b74d62f40bf11ae42d2fd8c4a441931e1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f14f9778290afbd7383f2dd12ee1f50b74d62f40bf11ae42d2fd8c4a441931e1","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e0
19b2687b/containers/kube-proxy/1af07bf5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~projected/kube-api-access-j6b95\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-kshs9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b","kubernetes.io/config.seen":"2024-09-16T10:33:27.024180818Z","kubernetes.io/config.source":"api"},"owner":"root"}]
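
	The JSON array above (repeated in the next log line as `cri.go:116] JSON = …`) is the runc-style container list that minikube's CRI helper parses before the status checks below. A minimal sketch, assuming an illustrative struct and input file rather than minikube's actual types, of decoding just the fields this log uses later (`id`, `status`, and one annotation):

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    	"os"
	    )

	    // container mirrors only the fields of the runc `list`-style JSON
	    // entries above that matter for the status filtering in this log.
	    type container struct {
	    	ID          string            `json:"id"`
	    	Status      string            `json:"status"`
	    	Annotations map[string]string `json:"annotations"`
	    }

	    func main() {
	    	// containers.json is a hypothetical file holding the JSON array above.
	    	data, err := os.ReadFile("containers.json")
	    	if err != nil {
	    		panic(err)
	    	}
	    	var list []container
	    	if err := json.Unmarshal(data, &list); err != nil {
	    		panic(err)
	    	}
	    	for _, c := range list {
	    		fmt.Println(c.ID, c.Status, c.Annotations["io.kubernetes.container.name"])
	    	}
	    }
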
	I0916 10:33:49.617849   38254 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b/userdata","rootfs":"/var/lib/containers/storage/overlay/910a0c2bc01315fa3a464fded4f710b1057d34c7d7b2857e18a8de16957c048f/merged","created":"2024-09-16T10:33:38.617357367Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-
o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:38.592037792Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3","io.kube
rnetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-wjzzx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-wjzzx_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/910a0c2bc01315fa3a464fded4f710b1057d34c7d7b2857e18a8de16957c048f/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a8423288f91be1a84a4da521d6ae34bd864cd162a94fbed9d42a73771704123e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a8423288f91be1a84a4da521d6ae34bd864cd1
62a94fbed9d42a73771704123e","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/containers/coredns/bf4a0824\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\
"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~projected/kube-api-access-6nbq8\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-wjzzx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","kubernetes.io/config.seen":"2024-09-16T10:33:38.232398573Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02/userdata","rootfs":"/var/lib/containers/storage/overlay/be9a1f372203e2b026d3db2eea6468eaad749813495a4be6dfe5a66b16b6ed84/merged","created":"2024-09-16T10:33:16.913425992Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79","io.k
ubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.872185655Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b3
0aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c02f70efafdd9ad1683640c8d3761d1d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-546931_c02f70efafdd9ad1683640c8d3761d1d/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/be9a1f372203e2b026d3db2eea6468eaad749813495a4be6dfe5a66b16b6ed84/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/878410a4a3694fdf2132194e1285396dab571b39a68ea3dbdc0049350911800d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"878410a4a3694fdf
2132194e1285396dab571b39a68ea3dbdc0049350911800d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/containers/kube-controller-manager/40c5a971\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\
":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-546931","io.kubernetes.pod.namespace":"kube-system",
"io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.hash":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.seen":"2024-09-16T10:33:16.360793733Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0/userdata","rootfs":"/var/lib/containers/storage/overlay/169734699d8a29a2148c6c48e972446d8f5032095b5bbb73973aadc1d219e93f/merged","created":"2024-09-16T10:33:38.599775677Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubern
etes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:38.574503795Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7e94614-567e-47
ba-a51a-426f09198dba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_a7e94614-567e-47ba-a51a-426f09198dba/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/169734699d8a29a2148c6c48e972446d8f5032095b5bbb73973aadc1d219e93f/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"fa
lse","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/containers/storage-provisioner/a6e61f0b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/volumes/kubernetes.io~projected/kube-api-access-2sn2d\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7e94614-567e-47ba-a51a-426f0
9198dba","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-09-16T10:33:38.233440095Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/75f3c106
06812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534/userdata","rootfs":"/var/lib/containers/storage/overlay/dac67d85252bf13f96a4320e1745721f70226b779a99caee53b0d5c2058e61f0/merged","created":"2024-09-16T10:33:16.898797659Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.c
ri-o.Created":"2024-09-16T10:33:16.857413084Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"adb8a765a0d6f587897c42f69e87ac66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546931_adb8a765a0d6f587897c42f69e87ac66/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dac67d85252bf13f96a4320e1745721f70226b779a99caee53b0d5c2058e61f0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-546931_kube-system_adb8a765
a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546931_kube-system_adb8a765a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/containers/kube-scheduler/744b9614\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/k
ubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:16.360795477Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81/userdata","rootfs":"/var/lib/containers/storage/overlay/3f2d5b81adda588bd3e05ccee93b9df3daf72aec973afcb7e5fae676c4a7ffff/merged","created":"2024-09-16T10:33:16.900739801Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7
df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.85647171Z","io.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1a
ee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eb02afa85fe4b42d87b2f90fa03a9ee4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-546931_eb02afa85fe4b42d87b2f90fa03a9ee4/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3f2d5b81adda588bd3e05ccee93b9df3daf72aec973afcb7e5fae676c4a7ffff/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a","io.kubernetes.cri
-o.SandboxName":"k8s_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/containers/kube-apiserver/66d438ec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/
certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.seen":"2024-09-16T10:33:16.360791837Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02","pid":0,"status":"sto
pped","bundle":"/run/containers/storage/overlay-containers/af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02/userdata","rootfs":"/var/lib/containers/storage/overlay/44dc8cdc891e682f4096ed10197d68a070a7151c57c8d6675a213e2401d90332/merged","created":"2024-09-16T10:33:27.512878128Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e80daca3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e80daca3\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02",
"io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:27.418309123Z","io.kubernetes.cri-o.Image":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri-o.ImageRef":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-6dtx8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"44bb424a-c279-467b-9256-64be125798f9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-6dtx8_44bb424a-c279-467b-9256-64be125798f9/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/44dc8cdc891e682f4096ed10197d68a070a7151c57c8d6675a213e2401d90332/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-6dtx8_kube-syste
m_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-6dtx8_kube-system_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/etc-hosts\",\"readonly\":false,\"propagation\
":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/containers/kindnet-cni/72735cde\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/volumes/kubernetes.io~projected/kube-api-access-pvmbd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-6dtx8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"44bb424a-c279-467b-9256-64be125798f9","kubernetes.io/config.seen":"2024-09-16T10:33:27.017005789Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f2b587ead9ac67a13360a9d4e64d816
2b8e8a689647afbe35780436d360a37eb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb/userdata","rootfs":"/var/lib/containers/storage/overlay/20cb6bba16fec712839eac07b5ce765faf2741ea000908ea8ac56a835d2fff6d/merged","created":"2024-09-16T10:33:16.907949976Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f2b587ead9ac67a13
360a9d4e64d8162b8e8a689647afbe35780436d360a37eb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:16.862227247Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4f74e884ad630d68b59e0dbdb6055584\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546931_4f74e884ad630d68b59e0dbdb6055584/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/20cb6bba16fec712839eac07b5ce765faf2741ea000908ea8ac56a835d2fff6d/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-func
tional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/containers/etcd/233a07f1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_p
ath\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4f74e884ad630d68b59e0dbdb6055584","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.seen":"2024-09-16T10:33:16.360785708Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d/userdata","rootfs":
"/var/lib/containers/storage/overlay/8005b4d90fbc1deaa0ddf38b3f6a0bc43e976e1a4a9f8fc787d1125d0d07fb03/merged","created":"2024-09-16T10:33:27.53460221Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:27.498124321Z","io.kubernetes.cri-o.Image":"60c00
5f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-kshs9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-kshs9_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8005b4d90fbc1deaa0ddf38b3f6a0bc43e976e1a4a9f8fc787d1125d0d07fb03/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f14f9778290afbd7383f2dd12ee1f50b
74d62f40bf11ae42d2fd8c4a441931e1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f14f9778290afbd7383f2dd12ee1f50b74d62f40bf11ae42d2fd8c4a441931e1","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b268
7b/containers/kube-proxy/1af07bf5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~projected/kube-api-access-j6b95\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-kshs9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b","kubernetes.io/config.seen":"2024-09-16T10:33:27.024180818Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0916 10:33:49.618206   38254 cri.go:126] list returned 8 containers
	I0916 10:33:49.618216   38254 cri.go:129] container: {ID:046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b Status:stopped}
	I0916 10:33:49.618229   38254 cri.go:135] skipping {046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618239   38254 cri.go:129] container: {ID:162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02 Status:stopped}
	I0916 10:33:49.618244   38254 cri.go:135] skipping {162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618248   38254 cri.go:129] container: {ID:3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0 Status:stopped}
	I0916 10:33:49.618253   38254 cri.go:135] skipping {3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618256   38254 cri.go:129] container: {ID:75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534 Status:stopped}
	I0916 10:33:49.618260   38254 cri.go:135] skipping {75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618265   38254 cri.go:129] container: {ID:9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81 Status:stopped}
	I0916 10:33:49.618269   38254 cri.go:135] skipping {9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618272   38254 cri.go:129] container: {ID:af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02 Status:stopped}
	I0916 10:33:49.618275   38254 cri.go:135] skipping {af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02 stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618281   38254 cri.go:129] container: {ID:f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb Status:stopped}
	I0916 10:33:49.618284   38254 cri.go:135] skipping {f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb stopped}: state = "stopped", want "paused"
	I0916 10:33:49.618290   38254 cri.go:129] container: {ID:fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d Status:stopped}
	I0916 10:33:49.618293   38254 cri.go:135] skipping {fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d stopped}: state = "stopped", want "paused"
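
The eight skip lines above are minikube filtering the CRI container list by state before pausing anything: every container it found is "stopped", none is "paused", so nothing qualifies. A minimal Go sketch of that filter, assuming a simplified Container type (the real logic lives in minikube's cri package; the names here are illustrative, not minikube's actual code):

    package main

    import "log"

    // Container mirrors the {ID Status} pairs printed above; the type and
    // the filter are an illustrative sketch, not minikube's implementation.
    type Container struct {
        ID     string
        Status string
    }

    // filterByStatus keeps only containers whose status matches want and
    // logs a skip line for everything else, as cri.go does above.
    func filterByStatus(containers []Container, want string) []Container {
        var kept []Container
        for _, c := range containers {
            if c.Status != want {
                log.Printf("skipping {%s %s}: state = %q, want %q", c.ID, c.Status, c.Status, want)
                continue
            }
            kept = append(kept, c)
        }
        return kept
    }

    func main() {
        paused := filterByStatus([]Container{{ID: "fa5a2b32930d", Status: "stopped"}}, "paused")
        log.Printf("%d containers to pause", len(paused))
    }
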
	I0916 10:33:49.618334   38254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:33:49.625627   38254 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0916 10:33:49.625651   38254 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0916 10:33:49.625657   38254 command_runner.go:130] > /var/lib/minikube/etcd:
	I0916 10:33:49.625660   38254 command_runner.go:130] > member
	I0916 10:33:49.626315   38254 kubeadm.go:408] found existing configuration files, will attempt cluster restart
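
The decision just logged follows directly from the ls probe above: the kubelet config, the kubeadm flags file, and an etcd data directory with a member all exist, so minikube restarts the existing cluster instead of running a fresh kubeadm init. A small sketch of that existence check (paths taken from the log; the function name is hypothetical):

    package main

    import (
        "fmt"
        "os"
    )

    // hasExistingCluster reports whether the files minikube probed for above
    // are all present; if so, it attempts a cluster restart rather than a
    // fresh kubeadm init. Illustrative only.
    func hasExistingCluster() bool {
        paths := []string{
            "/var/lib/kubelet/config.yaml",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/minikube/etcd",
        }
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                return false
            }
        }
        return true
    }

    func main() {
        if hasExistingCluster() {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        }
    }
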
	I0916 10:33:49.626332   38254 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:33:49.626386   38254 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:33:49.633963   38254 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:33:49.634429   38254 kubeconfig.go:125] found "functional-546931" server: "https://192.168.49.2:8441"
	I0916 10:33:49.634791   38254 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:33:49.634995   38254 kapi.go:59] client config for functional-546931: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
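
The rest.Config dump above shows how the test harness talks to the cluster: host https://192.168.49.2:8441, the profile's client certificate and key, and the shared CA from the run's .minikube directory. For reference, a minimal client-go sketch that loads the same kubeconfig and builds a clientset (the path comes from the log; error handling is reduced to panics for brevity):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig referenced in the log; BuildConfigFromFlags
        // returns a rest.Config carrying the same Host, client cert/key and CA.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3799/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        _ = clientset // ready for API calls such as the node GETs below
    }
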
	I0916 10:33:49.635364   38254 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:33:49.635523   38254 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:33:49.643129   38254 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0916 10:33:49.643158   38254 kubeadm.go:597] duration metric: took 16.81941ms to restartPrimaryControlPlane
	I0916 10:33:49.643169   38254 kubeadm.go:394] duration metric: took 77.739557ms to StartCluster
	I0916 10:33:49.643190   38254 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:33:49.643256   38254 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:33:49.643780   38254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:33:49.643985   38254 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:33:49.644050   38254 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:33:49.644203   38254 addons.go:69] Setting storage-provisioner=true in profile "functional-546931"
	I0916 10:33:49.644226   38254 addons.go:234] Setting addon storage-provisioner=true in "functional-546931"
	W0916 10:33:49.644235   38254 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:33:49.644181   38254 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:33:49.644268   38254 host.go:66] Checking if "functional-546931" exists ...
	I0916 10:33:49.644278   38254 addons.go:69] Setting default-storageclass=true in profile "functional-546931"
	I0916 10:33:49.644298   38254 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-546931"
	I0916 10:33:49.644589   38254 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:33:49.644653   38254 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:33:49.646651   38254 out.go:177] * Verifying Kubernetes components...
	I0916 10:33:49.648003   38254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:33:49.663793   38254 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:33:49.664132   38254 kapi.go:59] client config for functional-546931: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:33:49.664453   38254 addons.go:234] Setting addon default-storageclass=true in "functional-546931"
	W0916 10:33:49.664470   38254 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:33:49.664493   38254 host.go:66] Checking if "functional-546931" exists ...
	I0916 10:33:49.664783   38254 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:33:49.664937   38254 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:33:49.666385   38254 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:33:49.666402   38254 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:33:49.666441   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:49.682108   38254 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:33:49.682134   38254 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:33:49.682192   38254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:33:49.692860   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:49.705787   38254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:33:49.762902   38254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:33:49.773430   38254 node_ready.go:35] waiting up to 6m0s for node "functional-546931" to be "Ready" ...
	I0916 10:33:49.773561   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:49.773571   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:49.773582   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:49.773588   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:49.773815   38254 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:33:49.773834   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:49.802716   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:33:49.814043   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:33:49.857384   38254 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:33:49.860509   38254 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:49.860540   38254 retry.go:31] will retry after 300.245829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:49.869914   38254 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:33:49.872729   38254 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:49.872762   38254 retry.go:31] will retry after 238.748719ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
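
Both addon applies fail for the same underlying reason: the apiserver is still coming back up, so kubectl's OpenAPI download against localhost:8441 is refused, and minikube's retry.go reschedules each apply after a short randomized delay (≈300ms and ≈239ms above, with --force on the next attempts). A minimal sketch of that retry-after-delay pattern; the jitter and the attempt cap are assumptions, not minikube's exact policy:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a randomized delay between
    // failures and printing the "will retry after" line seen in the log.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        _ = retry(5, 300*time.Millisecond, func() error {
            return fmt.Errorf("connection refused") // stand-in for the kubectl apply
        })
    }
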
	I0916 10:33:50.112285   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:33:50.161885   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:33:50.171454   38254 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:33:50.177236   38254 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:50.177268   38254 retry.go:31] will retry after 529.480717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:50.274595   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:50.274626   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:50.274638   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:50.274644   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:50.274973   38254 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:33:50.274992   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:50.315059   38254 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:33:50.317990   38254 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:50.318021   38254 retry.go:31] will retry after 305.983384ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:33:50.624430   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:33:50.707033   38254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:33:50.774228   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:50.774255   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:50.774263   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:50.774269   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:50.774569   38254 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:33:50.774585   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:51.274368   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:51.274392   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:51.274399   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:51.274405   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.038248   38254 round_trippers.go:574] Response Status: 200 OK in 1763 milliseconds
	I0916 10:33:53.038275   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.038284   38254 round_trippers.go:580]     Audit-Id: 1c642505-dccc-43a1-8ea3-320a97466b10
	I0916 10:33:53.038289   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.038294   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.038297   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:33:53.038301   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:33:53.038306   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.038412   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.039318   38254 node_ready.go:49] node "functional-546931" has status "Ready":"True"
	I0916 10:33:53.039341   38254 node_ready.go:38] duration metric: took 3.265875226s for node "functional-546931" to be "Ready" ...
	I0916 10:33:53.039354   38254 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:33:53.039406   38254 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:33:53.039420   38254 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:33:53.039489   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:33:53.039497   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.039507   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.039513   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.100254   38254 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0916 10:33:53.100282   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.100293   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:33:53.100299   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:33:53.100305   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.100309   38254 round_trippers.go:580]     Audit-Id: ae4b4bee-0fe7-4f86-9096-659df06d797e
	I0916 10:33:53.100316   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.100321   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.101912   38254 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-wjzzx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","resourceVersion":"437","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"e5f0af21-e8d5-4d2c-a475-5941bddff6bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5f0af21-e8d5-4d2c-a475-5941bddff6bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59464 chars]
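
From here the log is the pod_ready.go phase: one PodList call enumerates kube-system, then each system-critical pod is fetched repeatedly until its Ready condition is True. A condensed client-go sketch of that per-pod wait; the helper name and poll interval are illustrative, not minikube's own:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout elapses, mirroring the waits logged below.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3799/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(cs, "kube-system", "coredns-7c65d6cfc9-wjzzx", 6*time.Minute); err != nil {
            panic(err)
        }
    }
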
	I0916 10:33:53.107344   38254 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.107473   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-wjzzx
	I0916 10:33:53.107486   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.107498   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.107507   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.197526   38254 round_trippers.go:574] Response Status: 200 OK in 89 milliseconds
	I0916 10:33:53.197557   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.197567   38254 round_trippers.go:580]     Audit-Id: 651694ce-a38c-452e-9b01-11e9d57c8932
	I0916 10:33:53.197573   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.197577   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.197581   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:33:53.197587   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:33:53.197592   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.197753   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-wjzzx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","resourceVersion":"437","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"e5f0af21-e8d5-4d2c-a475-5941bddff6bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5f0af21-e8d5-4d2c-a475-5941bddff6bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6814 chars]
	I0916 10:33:53.198405   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.198429   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.198440   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.198449   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.203490   38254 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:33:53.203517   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.203527   38254 round_trippers.go:580]     Audit-Id: 7b802d09-8564-4668-abc2-0b4162246b03
	I0916 10:33:53.203535   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.203542   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.203545   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.203549   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.203568   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.203703   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.204288   38254 pod_ready.go:93] pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.204341   38254 pod_ready.go:82] duration metric: took 96.956266ms for pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.204382   38254 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.204515   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-546931
	I0916 10:33:53.204546   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.204584   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.204596   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.208347   38254 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:33:53.208421   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.208436   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.208440   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.208445   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.208450   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.208453   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.208458   38254 round_trippers.go:580]     Audit-Id: 3d330fd4-a8c9-4e1d-af79-01d37292c22a
	I0916 10:33:53.208704   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-546931","namespace":"kube-system","uid":"7fe96e5a-6112-4e96-981b-b15be906fa34","resourceVersion":"408","creationTimestamp":"2024-09-16T10:33:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.mirror":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.seen":"2024-09-16T10:33:16.360785708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6440 chars]
	I0916 10:33:53.209277   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.209312   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.209326   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.209348   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.214187   38254 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:33:53.214217   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.214225   38254 round_trippers.go:580]     Audit-Id: ace7b2e4-42fa-44aa-a282-895c07bcbc84
	I0916 10:33:53.214231   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.214235   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.214239   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.214245   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.214250   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.214765   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.215209   38254 pod_ready.go:93] pod "etcd-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.215239   38254 pod_ready.go:82] duration metric: took 10.839142ms for pod "etcd-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.215259   38254 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.215351   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-546931
	I0916 10:33:53.215365   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.215378   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.215392   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.294408   38254 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I0916 10:33:53.294434   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.294443   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.294450   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.294453   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.294457   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.294461   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.294466   38254 round_trippers.go:580]     Audit-Id: 03574138-3282-4aa3-aa83-814665603454
	I0916 10:33:53.294680   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-546931","namespace":"kube-system","uid":"19d3920d-b342-4764-b722-116797db07ca","resourceVersion":"414","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.mirror":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.seen":"2024-09-16T10:33:22.023551772Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8516 chars]
	I0916 10:33:53.295290   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.295318   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.295328   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.295334   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.303774   38254 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 10:33:53.303799   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.303850   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.303856   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.303862   38254 round_trippers.go:580]     Audit-Id: 96ad4e16-5211-4b10-90c9-83d766b93e24
	I0916 10:33:53.303866   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.303870   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.303874   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.304059   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.304531   38254 pod_ready.go:93] pod "kube-apiserver-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.304564   38254 pod_ready.go:82] duration metric: took 89.294956ms for pod "kube-apiserver-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.304601   38254 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.304715   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-546931
	I0916 10:33:53.304736   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.304762   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.304776   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.307514   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.307538   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.307548   38254 round_trippers.go:580]     Audit-Id: cda7b98e-3acf-4b93-aa0a-6aa95829a2e4
	I0916 10:33:53.307554   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.307561   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.307565   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.307571   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.307575   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.307703   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-546931","namespace":"kube-system","uid":"49789d64-6fd1-441c-b9e0-470a0832d127","resourceVersion":"416","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.mirror":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.seen":"2024-09-16T10:33:22.023553611Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8091 chars]
	I0916 10:33:53.308331   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.308349   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.308360   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.308366   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.311139   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.311160   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.311172   38254 round_trippers.go:580]     Audit-Id: d752729e-0858-4d4e-9528-9d2d7e158372
	I0916 10:33:53.311178   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.311184   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.311189   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.311192   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.311197   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.311355   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.311763   38254 pod_ready.go:93] pod "kube-controller-manager-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.311785   38254 pod_ready.go:82] duration metric: took 7.161521ms for pod "kube-controller-manager-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.311796   38254 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kshs9" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.311855   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kshs9
	I0916 10:33:53.311859   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.311866   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.311919   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.314064   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.314083   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.314092   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.314096   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.314102   38254 round_trippers.go:580]     Audit-Id: 9ef53038-6005-430b-a333-55401be5c3b3
	I0916 10:33:53.314105   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.314110   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.314113   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.314244   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kshs9","generateName":"kube-proxy-","namespace":"kube-system","uid":"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b","resourceVersion":"402","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86c1ab56-d49f-4f2c-8253-0494b746de56","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86c1ab56-d49f-4f2c-8253-0494b746de56\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6172 chars]
	I0916 10:33:53.314768   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.314787   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.314798   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.314807   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.318553   38254 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:33:53.318600   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.318622   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.318632   38254 round_trippers.go:580]     Audit-Id: 304083ef-0660-4c80-b607-7b0d2afbeabc
	I0916 10:33:53.318637   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.318641   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.318645   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.318650   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.319108   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.319589   38254 pod_ready.go:93] pod "kube-proxy-kshs9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:33:53.319619   38254 pod_ready.go:82] duration metric: took 7.815518ms for pod "kube-proxy-kshs9" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.319632   38254 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:33:53.439973   38254 request.go:632] Waited for 120.260508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:53.440058   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:53.440099   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.440120   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.440136   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.442326   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.442344   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.442353   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.442358   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.442363   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.442367   38254 round_trippers.go:580]     Audit-Id: ed8cfd9f-3256-4ec1-a34f-874048e03f2d
	I0916 10:33:53.442371   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.442375   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.442513   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:53.640393   38254 request.go:632] Waited for 197.36129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.640448   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:53.640453   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.640459   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.640463   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.642389   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:53.642414   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.642424   38254 round_trippers.go:580]     Audit-Id: 0757a158-ca2f-47f0-ba87-6a82d0a5c7e6
	I0916 10:33:53.642429   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.642433   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.642438   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.642442   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.642448   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.642584   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:53.840548   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:53.840577   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.840599   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.840608   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.843027   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.843056   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.843067   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.843073   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.843077   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.843082   38254 round_trippers.go:580]     Audit-Id: 27427865-937d-4133-830d-1adc18e56eda
	I0916 10:33:53.843087   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.843090   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.843732   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:53.955357   38254 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0916 10:33:53.955390   38254 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0916 10:33:53.955402   38254 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0916 10:33:53.955415   38254 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0916 10:33:53.955423   38254 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0916 10:33:53.955435   38254 command_runner.go:130] > pod/storage-provisioner configured
	I0916 10:33:53.955464   38254 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.331007744s)
	I0916 10:33:53.955500   38254 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0916 10:33:53.955548   38254 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.248481765s)
	I0916 10:33:53.955697   38254 round_trippers.go:463] GET https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses
	I0916 10:33:53.955710   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.955720   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.955726   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.958346   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:53.958370   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.958378   38254 round_trippers.go:580]     Audit-Id: c6bea1e7-9038-41e9-be20-1f68f1bcf84c
	I0916 10:33:53.958381   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.958385   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.958388   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.958391   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.958395   38254 round_trippers.go:580]     Content-Length: 1273
	I0916 10:33:53.958397   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.958427   38254 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"463"},"items":[{"metadata":{"name":"standard","uid":"7dc87164-1259-473b-bcbc-5a709a2c0af0","resourceVersion":"377","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:33:53.958853   38254 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"7dc87164-1259-473b-bcbc-5a709a2c0af0","resourceVersion":"377","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:33:53.958910   38254 round_trippers.go:463] PUT https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:33:53.958917   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:53.958924   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:53.958929   38254 round_trippers.go:473]     Content-Type: application/json
	I0916 10:33:53.958935   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:53.962008   38254 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:33:53.962029   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:53.962036   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:53 GMT
	I0916 10:33:53.962041   38254 round_trippers.go:580]     Audit-Id: 85cec3d1-2992-4d80-ae3a-330f67f88a6b
	I0916 10:33:53.962044   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:53.962054   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:53.962058   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:53.962060   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:53.962064   38254 round_trippers.go:580]     Content-Length: 1220
	I0916 10:33:53.962109   38254 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"7dc87164-1259-473b-bcbc-5a709a2c0af0","resourceVersion":"377","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:33:53.965209   38254 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:33:53.966605   38254 addons.go:510] duration metric: took 4.322556117s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 10:33:54.040278   38254 request.go:632] Waited for 195.856487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.040342   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.040353   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.040364   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.040371   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.042055   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:54.042077   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.042086   38254 round_trippers.go:580]     Audit-Id: c5198a6f-5bb8-4a26-b48c-26bca8116a3e
	I0916 10:33:54.042091   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.042097   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.042101   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.042105   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.042109   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.042225   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:54.320703   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:54.320728   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.320736   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.320741   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.322904   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:54.322928   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.322936   38254 round_trippers.go:580]     Audit-Id: 5de2991c-0858-4a3e-9a47-e142f64addac
	I0916 10:33:54.322942   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.322946   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.322950   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.322954   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.322958   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.323122   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:54.439811   38254 request.go:632] Waited for 116.306704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.439892   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.439897   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.439907   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.439911   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.441950   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:54.441968   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.441975   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.441979   38254 round_trippers.go:580]     Audit-Id: a2764685-a036-4033-b994-bbe592950d2d
	I0916 10:33:54.441983   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.441987   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.441990   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.441993   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.442185   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:54.820734   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:54.820760   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.820769   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.820774   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.823033   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:54.823059   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.823068   38254 round_trippers.go:580]     Audit-Id: f97efdb6-d58f-4112-a0d4-badbca5fc43f
	I0916 10:33:54.823075   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.823080   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.823085   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.823091   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.823095   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.823237   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:54.839893   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:54.839942   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:54.839954   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:54.839960   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:54.842167   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:54.842193   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:54.842200   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:54.842205   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:54.842207   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:54 GMT
	I0916 10:33:54.842210   38254 round_trippers.go:580]     Audit-Id: ad1e3030-3d88-4bcf-82c6-c28d45be3788
	I0916 10:33:54.842212   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:54.842216   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:54.842429   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:55.320044   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:55.320068   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:55.320076   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:55.320081   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:55.322332   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:55.322356   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:55.322366   38254 round_trippers.go:580]     Audit-Id: b1f7491a-b7ff-4038-ad39-51ecaa970ccc
	I0916 10:33:55.322370   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:55.322372   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:55.322376   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:55.322380   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:55.322383   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:55 GMT
	I0916 10:33:55.322551   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:55.322974   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:55.322990   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:55.323002   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:55.323008   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:55.324737   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:55.324751   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:55.324758   38254 round_trippers.go:580]     Audit-Id: b9c5e412-285c-43f1-bd9e-49afd075300e
	I0916 10:33:55.324765   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:55.324771   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:55.324775   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:55.324780   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:55.324792   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:55 GMT
	I0916 10:33:55.324970   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:55.325280   38254 pod_ready.go:103] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:33:55.820788   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:55.820811   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:55.820819   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:55.820822   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:55.823202   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:55.823223   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:55.823230   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:55 GMT
	I0916 10:33:55.823234   38254 round_trippers.go:580]     Audit-Id: 93ea65b0-3b2d-45f0-8d9f-7fd6d377658f
	I0916 10:33:55.823238   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:55.823242   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:55.823245   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:55.823247   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:55.823463   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:55.823905   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:55.823922   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:55.823932   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:55.823937   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:55.825728   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:55.825742   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:55.825753   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:55 GMT
	I0916 10:33:55.825756   38254 round_trippers.go:580]     Audit-Id: a9596991-ca14-4d5d-badc-32c5aa86bc01
	I0916 10:33:55.825760   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:55.825762   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:55.825765   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:55.825768   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:55.825951   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:56.320626   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:56.320662   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:56.320673   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:56.320677   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:56.322944   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:56.322961   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:56.322968   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:56 GMT
	I0916 10:33:56.322972   38254 round_trippers.go:580]     Audit-Id: c2f282b4-169c-48e5-b6d4-5177c73ef827
	I0916 10:33:56.322975   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:56.322978   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:56.322980   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:56.322983   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:56.323162   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:56.323542   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:56.323554   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:56.323561   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:56.323564   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:56.325237   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:56.325256   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:56.325266   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:56.325276   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:56.325281   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:56.325286   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:56 GMT
	I0916 10:33:56.325294   38254 round_trippers.go:580]     Audit-Id: 3c2420df-2485-44c9-8205-15e3f735028c
	I0916 10:33:56.325298   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:56.325465   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:56.820054   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:56.820080   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:56.820087   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:56.820091   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:56.822399   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:56.822423   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:56.822433   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:56.822437   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:56.822443   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:56.822449   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:56.822453   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:56 GMT
	I0916 10:33:56.822459   38254 round_trippers.go:580]     Audit-Id: fa6c8bc1-f577-440c-8488-5aec0e46477f
	I0916 10:33:56.822567   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:56.822974   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:56.822988   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:56.822995   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:56.823000   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:56.824585   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:56.824603   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:56.824612   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:56 GMT
	I0916 10:33:56.824620   38254 round_trippers.go:580]     Audit-Id: 007c0986-fb46-469e-896a-3f9e05879f5c
	I0916 10:33:56.824628   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:56.824632   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:56.824638   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:56.824645   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:56.824792   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:57.320450   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:57.320478   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:57.320486   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:57.320489   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:57.322520   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:57.322541   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:57.322547   38254 round_trippers.go:580]     Audit-Id: ee77ef11-fb1f-4703-afa5-559d02e420ba
	I0916 10:33:57.322551   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:57.322554   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:57.322558   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:57.322564   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:57.322568   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:57 GMT
	I0916 10:33:57.322748   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:57.323134   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:57.323148   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:57.323155   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:57.323158   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:57.324904   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:57.324921   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:57.324929   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:57.324934   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:57.324939   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:57.324943   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:57 GMT
	I0916 10:33:57.324947   38254 round_trippers.go:580]     Audit-Id: bf45afbc-2dde-45ac-83b6-34cd3e87137a
	I0916 10:33:57.324951   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:57.325111   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:57.325474   38254 pod_ready.go:103] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:33:57.820815   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:57.820836   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:57.820844   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:57.820847   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:57.823010   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:57.823033   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:57.823043   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:57.823054   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:57 GMT
	I0916 10:33:57.823058   38254 round_trippers.go:580]     Audit-Id: 8b9870de-9f9d-4f26-a6dd-3c15a4e1cd62
	I0916 10:33:57.823062   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:57.823066   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:57.823072   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:57.823231   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:57.823655   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:57.823671   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:57.823681   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:57.823689   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:57.825433   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:57.825471   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:57.825482   38254 round_trippers.go:580]     Audit-Id: b556fffd-a085-434b-8294-e3e7380d7f2e
	I0916 10:33:57.825490   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:57.825497   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:57.825503   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:57.825513   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:57.825519   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:57 GMT
	I0916 10:33:57.825681   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:58.320335   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:58.320363   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:58.320371   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:58.320376   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:58.322583   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:58.322602   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:58.322608   38254 round_trippers.go:580]     Audit-Id: 44b346cb-cc00-492c-b32a-a62119404892
	I0916 10:33:58.322613   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:58.322617   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:58.322620   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:58.322623   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:58.322626   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:58 GMT
	I0916 10:33:58.322753   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:58.323128   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:58.323141   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:58.323147   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:58.323152   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:58.324775   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:58.324790   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:58.324796   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:58 GMT
	I0916 10:33:58.324800   38254 round_trippers.go:580]     Audit-Id: ecc5a8f8-b95d-41df-a751-9244ce0fee39
	I0916 10:33:58.324803   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:58.324806   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:58.324809   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:58.324812   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:58.324945   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:58.820734   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:58.820761   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:58.820770   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:58.820778   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:58.823145   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:58.823167   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:58.823175   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:58.823180   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:58 GMT
	I0916 10:33:58.823183   38254 round_trippers.go:580]     Audit-Id: a2364f0a-ebbb-497c-b763-1afadf9035e2
	I0916 10:33:58.823188   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:58.823190   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:58.823193   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:58.823316   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:58.823716   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:58.823730   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:58.823736   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:58.823740   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:58.825726   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:58.825754   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:58.825762   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:58.825768   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:58.825778   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:58 GMT
	I0916 10:33:58.825784   38254 round_trippers.go:580]     Audit-Id: 2d702b31-9359-4fa6-9411-2fd783e8dd5e
	I0916 10:33:58.825789   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:58.825794   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:58.825932   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:59.320581   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:59.320607   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:59.320616   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:59.320621   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:59.322982   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:59.323010   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:59.323021   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:59.323026   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:59.323031   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:59 GMT
	I0916 10:33:59.323034   38254 round_trippers.go:580]     Audit-Id: f3054f05-d6c1-4d2f-a614-b395364e5bb8
	I0916 10:33:59.323039   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:59.323043   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:59.323164   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:59.323579   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:59.323596   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:59.323603   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:59.323607   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:59.325421   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:59.325453   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:59.325464   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:59 GMT
	I0916 10:33:59.325470   38254 round_trippers.go:580]     Audit-Id: 2fd4d607-a47e-41b7-8a45-7001a5e948e4
	I0916 10:33:59.325476   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:59.325481   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:59.325489   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:59.325494   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:59.325648   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:33:59.325944   38254 pod_ready.go:103] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:33:59.820645   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:33:59.820674   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:59.820686   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:59.820692   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:59.823119   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:33:59.823146   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:59.823154   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:59.823161   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:59 GMT
	I0916 10:33:59.823165   38254 round_trippers.go:580]     Audit-Id: d15a0dd8-62a2-42c6-baf2-901ac12065e8
	I0916 10:33:59.823171   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:59.823176   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:59.823180   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:59.823287   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:33:59.823773   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:33:59.823789   38254 round_trippers.go:469] Request Headers:
	I0916 10:33:59.823800   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:33:59.823806   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:33:59.825642   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:33:59.825664   38254 round_trippers.go:577] Response Headers:
	I0916 10:33:59.825674   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:33:59.825678   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:33:59.825685   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:33:59 GMT
	I0916 10:33:59.825689   38254 round_trippers.go:580]     Audit-Id: 60a43eac-08d9-4e91-99b1-541773d1eca7
	I0916 10:33:59.825693   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:33:59.825698   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:33:59.825874   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:00.320369   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:00.320398   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:00.320408   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:00.320413   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:00.322249   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:00.322270   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:00.322279   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:00 GMT
	I0916 10:34:00.322285   38254 round_trippers.go:580]     Audit-Id: 666ab14f-79c0-4af6-afb7-c2f52d3e5ecd
	I0916 10:34:00.322290   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:00.322296   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:00.322301   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:00.322305   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:00.322412   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:00.322907   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:00.322927   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:00.322938   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:00.322945   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:00.324586   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:00.324611   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:00.324620   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:00 GMT
	I0916 10:34:00.324625   38254 round_trippers.go:580]     Audit-Id: ed628808-6128-4c07-bf36-e134c049106d
	I0916 10:34:00.324630   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:00.324637   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:00.324640   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:00.324648   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:00.324874   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:00.820334   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:00.820358   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:00.820366   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:00.820370   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:00.822843   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:00.822870   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:00.822883   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:00.822890   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:00 GMT
	I0916 10:34:00.822895   38254 round_trippers.go:580]     Audit-Id: 93d3ba47-68b9-4337-b09d-edba2937ed08
	I0916 10:34:00.822900   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:00.822905   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:00.822910   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:00.823078   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:00.823574   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:00.823593   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:00.823604   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:00.823611   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:00.825643   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:00.825662   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:00.825670   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:00.825676   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:00.825681   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:00 GMT
	I0916 10:34:00.825685   38254 round_trippers.go:580]     Audit-Id: 2136686e-a753-4e57-b719-93a1b7b6c12c
	I0916 10:34:00.825689   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:00.825694   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:00.825866   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:01.320542   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:01.320566   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:01.320574   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:01.320578   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:01.323004   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:01.323027   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:01.323038   38254 round_trippers.go:580]     Audit-Id: b8df7cd4-15dc-46b9-8c1c-0ae4633ffde3
	I0916 10:34:01.323043   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:01.323047   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:01.323051   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:01.323055   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:01.323059   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:01 GMT
	I0916 10:34:01.323146   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:01.323527   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:01.323541   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:01.323550   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:01.323555   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:01.325649   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:01.325669   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:01.325675   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:01 GMT
	I0916 10:34:01.325678   38254 round_trippers.go:580]     Audit-Id: 09891e39-5d40-4516-9eda-852bef0ec59d
	I0916 10:34:01.325681   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:01.325684   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:01.325687   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:01.325690   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:01.325862   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:01.326191   38254 pod_ready.go:103] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:34:01.820611   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:01.820638   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:01.820649   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:01.820654   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:01.823173   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:01.823194   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:01.823201   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:01.823205   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:01.823208   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:01 GMT
	I0916 10:34:01.823211   38254 round_trippers.go:580]     Audit-Id: 23913692-996c-43c8-805e-f70780f0630d
	I0916 10:34:01.823214   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:01.823216   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:01.823370   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:01.823803   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:01.823818   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:01.823825   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:01.823828   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:01.825788   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:01.825811   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:01.825821   38254 round_trippers.go:580]     Audit-Id: 079e1a21-5253-4f0c-b187-bf832d122510
	I0916 10:34:01.825826   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:01.825833   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:01.825835   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:01.825838   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:01.825841   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:01 GMT
	I0916 10:34:01.825965   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:02.320721   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:02.320748   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:02.320756   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:02.320760   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:02.322968   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:02.322994   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:02.323001   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:02.323006   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:02 GMT
	I0916 10:34:02.323010   38254 round_trippers.go:580]     Audit-Id: 33d8ee38-f4ba-4044-821a-c1a98fc88f52
	I0916 10:34:02.323013   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:02.323016   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:02.323019   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:02.323169   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:02.323569   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:02.323582   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:02.323588   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:02.323596   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:02.325386   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:02.325408   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:02.325418   38254 round_trippers.go:580]     Audit-Id: 149dee04-a1d1-4a2c-9543-448d002743c1
	I0916 10:34:02.325426   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:02.325431   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:02.325437   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:02.325463   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:02.325472   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:02 GMT
	I0916 10:34:02.325653   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:02.820273   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:02.820302   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:02.820310   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:02.820314   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:02.822782   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:02.822807   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:02.822815   38254 round_trippers.go:580]     Audit-Id: b8c02830-bf20-4086-a35b-5ddf99e664ff
	I0916 10:34:02.822821   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:02.822826   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:02.822829   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:02.822832   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:02.822836   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:02 GMT
	I0916 10:34:02.822938   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:02.823337   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:02.823351   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:02.823358   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:02.823363   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:02.825262   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:02.825281   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:02.825290   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:02.825296   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:02.825301   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:02.825307   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:02.825311   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:02 GMT
	I0916 10:34:02.825316   38254 round_trippers.go:580]     Audit-Id: 0d03e038-f5cd-4017-b5c9-4dd1a324073d
	I0916 10:34:02.825520   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:03.320100   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:03.320127   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.320135   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.320140   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.322583   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.322613   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.322622   38254 round_trippers.go:580]     Audit-Id: 13854a33-5ffa-49ec-bd4b-388773c01dd5
	I0916 10:34:03.322629   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.322632   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.322635   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.322638   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.322642   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.322822   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"449","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5421 chars]
	I0916 10:34:03.323193   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:03.323205   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.323212   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.323217   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.324947   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:03.324963   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.324970   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.324976   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.324980   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.324983   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.324986   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.324989   38254 round_trippers.go:580]     Audit-Id: f270afe4-49bc-466e-a752-d87f3eea1493
	I0916 10:34:03.325105   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:03.820870   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931
	I0916 10:34:03.820895   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.820903   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.820907   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.823280   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.823307   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.823318   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.823325   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.823328   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.823332   38254 round_trippers.go:580]     Audit-Id: d27beddf-0127-4261-bf41-c13ee88100e5
	I0916 10:34:03.823335   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.823338   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.823493   38254 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-546931","namespace":"kube-system","uid":"40d727b8-b05b-40b1-9837-87741459ef16","resourceVersion":"533","creationTimestamp":"2024-09-16T10:33:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.mirror":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:22.023555002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5177 chars]
	I0916 10:34:03.823902   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-546931
	I0916 10:34:03.823918   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.823924   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.823928   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.825921   38254 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:34:03.825945   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.825954   38254 round_trippers.go:580]     Audit-Id: 725f2c59-296c-4129-b350-9f76c3e0f784
	I0916 10:34:03.825960   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.825965   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.825969   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.825973   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.825977   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.826082   38254 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:19Z","fieldsType":"FieldsV1","f [truncated 5951 chars]
	I0916 10:34:03.826387   38254 pod_ready.go:93] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:03.826404   38254 pod_ready.go:82] duration metric: took 10.506765676s for pod "kube-scheduler-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:03.826415   38254 pod_ready.go:39] duration metric: took 10.787048666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
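The ten seconds of polling above are the readiness loop in action: the client re-fetches the Pod (and its Node) roughly every 500ms until the Pod reports a Ready=True condition. A minimal sketch of that loop, assuming client-go (waitPodReady is an illustrative name, not minikube's actual helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the API server roughly every 500ms (the cadence visible
// in the log above) until the pod reports Ready=True or the context expires.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Sketch assumption: a kubeconfig in the default location points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, client, "kube-system", "kube-scheduler-functional-546931"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}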
	I0916 10:34:03.826433   38254 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:34:03.826480   38254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:34:03.836807   38254 command_runner.go:130] > 3244
	I0916 10:34:03.837712   38254 api_server.go:72] duration metric: took 14.193700208s to wait for apiserver process to appear ...
	I0916 10:34:03.837741   38254 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:34:03.837769   38254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:34:03.842554   38254 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0916 10:34:03.842659   38254 round_trippers.go:463] GET https://192.168.49.2:8441/version
	I0916 10:34:03.842668   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.842676   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.842682   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.843506   38254 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:34:03.843527   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.843533   38254 round_trippers.go:580]     Audit-Id: 4fa131c1-349b-4955-8ff8-e9dd0a8409e7
	I0916 10:34:03.843537   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.843540   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.843543   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.843549   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.843553   38254 round_trippers.go:580]     Content-Length: 263
	I0916 10:34:03.843559   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.843578   38254 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:34:03.843692   38254 api_server.go:141] control plane version: v1.31.1
	I0916 10:34:03.843716   38254 api_server.go:131] duration metric: took 5.967207ms to wait for apiserver health ...
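Once the kube-apiserver process is found via pgrep, the health wait is a plain HTTPS probe: GET /healthz until it returns 200 with body "ok", then GET /version to record the control-plane version. A rough standard-library sketch of the same probe (InsecureSkipVerify is a shortcut for this sketch only; the real client authenticates with the cluster's certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch shortcut only
	}}
	// Probe the same two endpoints seen in the log above.
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.49.2:8441" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
	}
}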
	I0916 10:34:03.843726   38254 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:34:03.843802   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:34:03.843812   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.843822   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.843832   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.846170   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.846194   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.846208   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.846214   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.846219   38254 round_trippers.go:580]     Audit-Id: 95c1ad71-fc2a-4a8b-8ff5-79003879fc7e
	I0916 10:34:03.846224   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.846231   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.846239   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.846721   38254 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-wjzzx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","resourceVersion":"471","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"e5f0af21-e8d5-4d2c-a475-5941bddff6bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5f0af21-e8d5-4d2c-a475-5941bddff6bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 61610 chars]
	I0916 10:34:03.848543   38254 system_pods.go:59] 8 kube-system pods found
	I0916 10:34:03.848585   38254 system_pods.go:61] "coredns-7c65d6cfc9-wjzzx" [2df1d14c-ae32-4b0d-b3fa-6cdcab40919a] Running
	I0916 10:34:03.848593   38254 system_pods.go:61] "etcd-functional-546931" [7fe96e5a-6112-4e96-981b-b15be906fa34] Running
	I0916 10:34:03.848598   38254 system_pods.go:61] "kindnet-6dtx8" [44bb424a-c279-467b-9256-64be125798f9] Running
	I0916 10:34:03.848605   38254 system_pods.go:61] "kube-apiserver-functional-546931" [19d3920d-b342-4764-b722-116797db07ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:34:03.848621   38254 system_pods.go:61] "kube-controller-manager-functional-546931" [49789d64-6fd1-441c-b9e0-470a0832d127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:34:03.848628   38254 system_pods.go:61] "kube-proxy-kshs9" [c2a1ef0a-22f5-4b04-a7fe-30e019b2687b] Running
	I0916 10:34:03.848632   38254 system_pods.go:61] "kube-scheduler-functional-546931" [40d727b8-b05b-40b1-9837-87741459ef16] Running
	I0916 10:34:03.848638   38254 system_pods.go:61] "storage-provisioner" [a7e94614-567e-47ba-a51a-426f09198dba] Running
	I0916 10:34:03.848644   38254 system_pods.go:74] duration metric: took 4.909588ms to wait for pod list to return data ...
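The pod sweep above is a single LIST of the kube-system namespace, with each pod reported by phase and any unready containers called out (as for kube-apiserver and kube-controller-manager here). A compact client-go sketch of the same check, under the same kubeconfig assumption as before:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Count ready containers so partially ready pods stand out.
		ready := 0
		for _, cs := range p.Status.ContainerStatuses {
			if cs.Ready {
				ready++
			}
		}
		fmt.Printf("%q %s (%d/%d containers ready)\n", p.Name, p.Status.Phase, ready, len(p.Status.ContainerStatuses))
	}
}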
	I0916 10:34:03.848654   38254 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:34:03.848728   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/default/serviceaccounts
	I0916 10:34:03.848736   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.848742   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.848745   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.851110   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.851132   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.851142   38254 round_trippers.go:580]     Audit-Id: 0c4c4e4f-2a84-4e40-8e0b-80cb31bddf7e
	I0916 10:34:03.851149   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.851156   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.851161   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.851167   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.851173   38254 round_trippers.go:580]     Content-Length: 261
	I0916 10:34:03.851177   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.851198   38254 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0e9c2a95-502e-45bd-bfd7-c5d3bafcf61a","resourceVersion":"327","creationTimestamp":"2024-09-16T10:33:26Z"}}]}
	I0916 10:34:03.851350   38254 default_sa.go:45] found service account: "default"
	I0916 10:34:03.851364   38254 default_sa.go:55] duration metric: took 2.70174ms for default service account to be created ...
	I0916 10:34:03.851371   38254 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:34:03.851420   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:34:03.851427   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.851433   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.851437   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.853705   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.853726   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.853735   38254 round_trippers.go:580]     Audit-Id: 04a87712-1128-4c95-a249-6b98ac8a0c1f
	I0916 10:34:03.853739   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.853745   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.853750   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.853755   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.853763   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.854196   38254 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-wjzzx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","resourceVersion":"471","creationTimestamp":"2024-09-16T10:33:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"e5f0af21-e8d5-4d2c-a475-5941bddff6bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:33:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5f0af21-e8d5-4d2c-a475-5941bddff6bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 61610 chars]
	I0916 10:34:03.855991   38254 system_pods.go:86] 8 kube-system pods found
	I0916 10:34:03.856010   38254 system_pods.go:89] "coredns-7c65d6cfc9-wjzzx" [2df1d14c-ae32-4b0d-b3fa-6cdcab40919a] Running
	I0916 10:34:03.856015   38254 system_pods.go:89] "etcd-functional-546931" [7fe96e5a-6112-4e96-981b-b15be906fa34] Running
	I0916 10:34:03.856019   38254 system_pods.go:89] "kindnet-6dtx8" [44bb424a-c279-467b-9256-64be125798f9] Running
	I0916 10:34:03.856024   38254 system_pods.go:89] "kube-apiserver-functional-546931" [19d3920d-b342-4764-b722-116797db07ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:34:03.856033   38254 system_pods.go:89] "kube-controller-manager-functional-546931" [49789d64-6fd1-441c-b9e0-470a0832d127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:34:03.856039   38254 system_pods.go:89] "kube-proxy-kshs9" [c2a1ef0a-22f5-4b04-a7fe-30e019b2687b] Running
	I0916 10:34:03.856043   38254 system_pods.go:89] "kube-scheduler-functional-546931" [40d727b8-b05b-40b1-9837-87741459ef16] Running
	I0916 10:34:03.856051   38254 system_pods.go:89] "storage-provisioner" [a7e94614-567e-47ba-a51a-426f09198dba] Running
	I0916 10:34:03.856057   38254 system_pods.go:126] duration metric: took 4.679727ms to wait for k8s-apps to be running ...
	I0916 10:34:03.856063   38254 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:34:03.856106   38254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:34:03.866975   38254 system_svc.go:56] duration metric: took 10.90356ms WaitForService to wait for kubelet
	I0916 10:34:03.867005   38254 kubeadm.go:582] duration metric: took 14.22299597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:34:03.867022   38254 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:34:03.867097   38254 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes
	I0916 10:34:03.867108   38254 round_trippers.go:469] Request Headers:
	I0916 10:34:03.867116   38254 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:34:03.867119   38254 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:34:03.869660   38254 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:34:03.869694   38254 round_trippers.go:577] Response Headers:
	I0916 10:34:03.869702   38254 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9645665d-8732-48e6-aa14-faa8478b6b90
	I0916 10:34:03.869708   38254 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3ac0eda9-5df0-4177-af0c-56923ceb4dd2
	I0916 10:34:03.869713   38254 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:34:03 GMT
	I0916 10:34:03.869718   38254 round_trippers.go:580]     Audit-Id: 53543f02-095e-42de-97a3-11493905ae50
	I0916 10:34:03.869722   38254 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:34:03.869727   38254 round_trippers.go:580]     Content-Type: application/json
	I0916 10:34:03.869909   38254 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"functional-546931","uid":"04242858-d69d-454e-ad64-118215c20e77","resourceVersion":"419","creationTimestamp":"2024-09-16T10:33:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-546931","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-546931","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_33_23_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 6004 chars]
	I0916 10:34:03.870264   38254 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:34:03.870285   38254 node_conditions.go:123] node cpu capacity is 8
	I0916 10:34:03.870297   38254 node_conditions.go:105] duration metric: took 3.26967ms to run NodePressure ...
	I0916 10:34:03.870310   38254 start.go:241] waiting for startup goroutines ...
	I0916 10:34:03.870323   38254 start.go:246] waiting for cluster config update ...
	I0916 10:34:03.870338   38254 start.go:255] writing updated cluster config ...
	I0916 10:34:03.870574   38254 ssh_runner.go:195] Run: rm -f paused
	I0916 10:34:03.877276   38254 out.go:177] * Done! kubectl is now configured to use "functional-546931" cluster and "default" namespace by default
	E0916 10:34:03.878464   38254 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
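	
	Note: the "exec format error" above means the kernel refused to execute /usr/local/bin/kubectl at all, which on a Linux amd64 host usually indicates a kubectl binary built for a different architecture, or a truncated/corrupt file. A minimal check with standard tools, run on the host (the expected "x86-64" result is an assumption based on the amd64 node described below, not something this run verified):
	
	    uname -m                      # host architecture; x86_64 here
	    file /usr/local/bin/kubectl   # should report an ELF 64-bit x86-64 executable; anything else explains the error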
	
	
	==> CRI-O <==
	Sep 16 10:33:51 functional-546931 crio[2734]: time="2024-09-16 10:33:51.207248970Z" level=info msg="Removing container: 3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0" id=62151ee8-c6a5-464d-8cec-978cf6447b1b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:33:51 functional-546931 crio[2734]: time="2024-09-16 10:33:51.220509458Z" level=info msg="Removed container 3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0: kube-system/storage-provisioner/storage-provisioner" id=62151ee8-c6a5-464d-8cec-978cf6447b1b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.127017911Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.130789847Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.130822916Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.130842517Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.134390994Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.134424030Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.134441843Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.137780881Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.137811484Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.137824667Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.141166008Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:01 functional-546931 crio[2734]: time="2024-09-16 10:34:01.141199175Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.043617802Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b19bd0ad-8d17-44c9-a9b4-626c95672d21 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.043888575Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b19bd0ad-8d17-44c9-a9b4-626c95672d21 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.044697477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f39dd9d7-ba32-45ca-acb8-d16f771a618c name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.044899874Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f39dd9d7-ba32-45ca-acb8-d16f771a618c name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.045656789Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c30634c2-a767-4c37-8657-e33888d2d54b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.045771114Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.058645949Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2ef8a64a5dc923c464e4178a52da4363133a12d896b2a2bc34be28bf1942ad23/merged/etc/passwd: no such file or directory"
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.058689405Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2ef8a64a5dc923c464e4178a52da4363133a12d896b2a2bc34be28bf1942ad23/merged/etc/group: no such file or directory"
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.092922558Z" level=info msg="Created container a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b: kube-system/storage-provisioner/storage-provisioner" id=c30634c2-a767-4c37-8657-e33888d2d54b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.093594796Z" level=info msg="Starting container: a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b" id=973194aa-7683-4156-b951-0194505df2af name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:34:02 functional-546931 crio[2734]: time="2024-09-16 10:34:02.100337706Z" level=info msg="Started container" PID=3687 containerID=a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b description=kube-system/storage-provisioner/storage-provisioner id=973194aa-7683-4156-b951-0194505df2af name=/runtime.v1.RuntimeService/StartContainer sandboxID=2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a51e8bf1740c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 seconds ago       Running             storage-provisioner       2                   2133c690032da       storage-provisioner
	03c9ff61deb56       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 seconds ago      Running             kube-scheduler            1                   f41f93397a4f0       kube-scheduler-functional-546931
	500f67fe93de9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago      Running             coredns                   1                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	0b7754d27e88e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 seconds ago      Running             kube-apiserver            1                   e87884b43c8cc       kube-apiserver-functional-546931
	1923f1dc4c46c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 seconds ago      Running             etcd                      1                   5b3fe285a2416       etcd-functional-546931
	8578098c4830c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 seconds ago      Running             kube-controller-manager   1                   878410a4a3694       kube-controller-manager-functional-546931
	e2626d8943ee8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   17 seconds ago      Running             kindnet-cni               1                   4aa3f5aefc537       kindnet-6dtx8
	ce7cf09b88b18       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 seconds ago      Running             kube-proxy                1                   f14f9778290af       kube-proxy-kshs9
	245fe0ec85c5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Exited              storage-provisioner       1                   2133c690032da       storage-provisioner
	046d8febeb6af       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago      Exited              coredns                   0                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	fa5a2b32930d3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   39 seconds ago      Exited              kube-proxy                0                   f14f9778290af       kube-proxy-kshs9
	af58051ec3f44       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   39 seconds ago      Exited              kindnet-cni               0                   4aa3f5aefc537       kindnet-6dtx8
	162127b15fc39       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   50 seconds ago      Exited              kube-controller-manager   0                   878410a4a3694       kube-controller-manager-functional-546931
	f2b587ead9ac6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   50 seconds ago      Exited              etcd                      0                   5b3fe285a2416       etcd-functional-546931
	75f3c10606812       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   50 seconds ago      Exited              kube-scheduler            0                   f41f93397a4f0       kube-scheduler-functional-546931
	9821c40f08076       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   50 seconds ago      Exited              kube-apiserver            0                   e87884b43c8cc       kube-apiserver-functional-546931
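	
	The listing above is the CRI client's view of the node. As a sketch, assuming crictl is available inside the node (it ships in the minikube image), the same table can be regenerated with:
	
	    sudo crictl ps -a    # all containers, including Exited ones, with image, state, attempt, and pod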
	
	
	==> coredns [046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44815 - 46736 "HINFO IN 2073509327164801531.6002369803072694315. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010858245s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32777 - 2477 "HINFO IN 3420670606416057959.5314460485211468677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080961734s
	
	
	==> describe nodes <==
	Name:               functional-546931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-546931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_33_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:34:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:33:38 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:33:38 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:33:38 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:33:38 +0000   Mon, 16 Sep 2024 10:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f68b7ee331b4ad9bbce7c85ad5c1bae
	  System UUID:                b53a3b64-9d61-46d9-a694-0cd93fe258a6
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wjzzx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     40s
	  kube-system                 etcd-functional-546931                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         47s
	  kube-system                 kindnet-6dtx8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-apiserver-functional-546931             250m (3%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-functional-546931    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-kshs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-functional-546931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 39s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     51s (x7 over 51s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 45s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  45s                kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s                kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s                kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   Starting                 45s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           42s                node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   NodeReady                29s                kubelet          Node functional-546931 status is now: NodeReady
	  Normal   RegisteredNode           11s                node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
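	
	This node summary is the standard "kubectl describe node" output. Assuming a working kubectl configured for this cluster (unlike the broken host binary above), it can be reproduced with:
	
	    kubectl --context functional-546931 describe node functional-546931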
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.000714]  #3
	[  +0.002750]  #4
	[  +0.001708] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003513] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002098] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54] <==
	{"level":"info","ts":"2024-09-16T10:33:50.922008Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:50.922164Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:50.994403Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:50.995578Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-16T10:33:50.997016Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:33:50.997239Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:33:50.999359Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:33:50.997487Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:33:50.997525Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:33:51.496075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.497277Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:51.497313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.497494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.498556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.498618Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.499441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:51.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [f2b587ead9ac67a13360a9d4e64d8162b8e8a689647afbe35780436d360a37eb] <==
	{"level":"info","ts":"2024-09-16T10:33:17.828400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:17.828407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:17.829406Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:17.829961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:17.829996Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:17.829958Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:17.830200Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:17.830240Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:17.830391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:17.830513Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:17.830541Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:17.832163Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:17.831931Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:17.832946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:17.833325Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:33:42.078427Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:33:42.078546Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:33:42.078678Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:33:42.078827Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:33:42.101370Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:33:42.101428Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:33:42.102916Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:33:42.104829Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:33:42.104933Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:33:42.104947Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:34:07 up 16 min,  0 users,  load average: 0.49, 0.44, 0.31
	Linux functional-546931 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [af58051ec3f446e206caebc3838a729a7beb518551b7e115d0144408a368ed02] <==
	I0916 10:33:27.696834       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:27.697066       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:27.697210       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:27.697228       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:27.697244       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:28.093776       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:28.093812       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:28.093820       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:28.294858       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:28.294886       1 metrics.go:61] Registering metrics
	I0916 10:33:28.294944       1 controller.go:374] Syncing nftables rules
	I0916 10:33:38.093893       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:38.093946       1 main.go:299] handling current node
	
	
	==> kindnet [e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e] <==
	I0916 10:33:50.598229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:50.599351       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:50.600449       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:50.600526       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:50.600569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:51.126371       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:51.126391       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:51.126399       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:53.293595       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:53.293784       1 metrics.go:61] Registering metrics
	I0916 10:33:53.293935       1 controller.go:374] Syncing nftables rules
	I0916 10:34:01.126660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:01.126723       1 main.go:299] handling current node
	
	
	==> kube-apiserver [0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75] <==
	I0916 10:33:53.025414       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:33:53.025525       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:33:53.025419       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0916 10:33:53.037294       1 controller.go:78] Starting OpenAPI AggregationController
	I0916 10:33:53.110615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:33:53.117071       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:33:53.195239       1 policy_source.go:224] refreshing policies
	I0916 10:33:53.193869       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:33:53.195821       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:33:53.194072       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:33:53.194090       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:33:53.194128       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:33:53.194141       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:33:53.194364       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:33:53.194377       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:33:53.196219       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:33:53.197527       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:33:53.197564       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:33:53.197596       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:33:53.203974       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:33:53.207909       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:33:53.215549       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:33:54.026461       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:33:56.595505       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:33:56.645804       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [9821c40f08076f1fcd08c570261337caff5ac2c70338e1d396b48c1755e9df81] <==
	W0916 10:33:42.091265       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091305       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091210       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091323       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091367       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091380       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091412       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091311       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091453       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091440       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091500       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091505       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091502       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091571       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091568       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091662       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091963       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 10:33:42.091981       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0916 10:33:42.091671       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0916 10:33:42.091697       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0916 10:33:42.091760       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.092046       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091824       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.092098       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:33:42.091912       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [162127b15fc39e2896e2d9d1b7635585f8f2b4a6527f07300641d7d9d58dcd02] <==
	I0916 10:33:26.175702       1 shared_informer.go:320] Caches are synced for crt configmap
	I0916 10:33:26.177967       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0916 10:33:26.226006       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:33:26.275043       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:26.279632       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:26.286417       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:26.705856       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:26.793550       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:26.793589       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:26.894010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:27.295783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="490.593748ms"
	I0916 10:33:27.304036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.092516ms"
	I0916 10:33:27.304148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.561µs"
	I0916 10:33:27.315397       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.434µs"
	I0916 10:33:27.424337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.189483ms"
	I0916 10:33:27.430800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.42195ms"
	I0916 10:33:27.430920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.394µs"
	I0916 10:33:38.213413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:38.224934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:38.230428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="77.364µs"
	I0916 10:33:38.243910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="90.886µs"
	I0916 10:33:39.144530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.651µs"
	I0916 10:33:39.162343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.700399ms"
	I0916 10:33:39.162441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.723µs"
	I0916 10:33:41.001062       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b] <==
	I0916 10:33:56.401158       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:33:56.401164       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:33:56.401172       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:33:56.401277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:56.403349       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:33:56.403423       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:33:56.403506       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:33:56.403561       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:33:56.513024       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:33:56.541883       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:56.542896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:33:56.544059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:33:56.544137       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:33:56.544141       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:33:56.548517       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.583700       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:33:56.600343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.606853       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:33:56.702066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.654324ms"
	I0916 10:33:56.702225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.375µs"
	I0916 10:33:57.010557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042413       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:58.552447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.544591ms"
	I0916 10:33:58.552540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.665µs"
	
	
	==> kube-proxy [ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b] <==
	I0916 10:33:50.617128       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:53.201354       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:53.201554       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:53.314988       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:53.315060       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:53.318944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:53.319862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:53.319904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.321510       1 config.go:199] "Starting service config controller"
	I0916 10:33:53.321547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:53.321583       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:53.321592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:53.322001       1 config.go:328] "Starting node config controller"
	I0916 10:33:53.322360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:53.421890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:53.421914       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:53.422563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fa5a2b32930d3ca7d1515596176b902b8d9df0bd0acc464a42a04ce50076709d] <==
	I0916 10:33:27.653280       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:27.801980       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:27.802051       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:27.821462       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:27.821527       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:27.823372       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:27.823814       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:27.823902       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:27.825081       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:27.825126       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:27.825165       1 config.go:328] "Starting node config controller"
	I0916 10:33:27.825175       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:27.825157       1 config.go:199] "Starting service config controller"
	I0916 10:33:27.825211       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:27.926184       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:33:27.926206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:27.926251       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a] <==
	I0916 10:33:51.925005       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:33:53.094343       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:33:53.094399       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:33:53.094414       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:33:53.094424       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:33:53.205695       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:33:53.205808       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.208746       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:33:53.208879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:33:53.208938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:33:53.208906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:33:53.309785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [75f3c10606812bf751e0b5aac23e34c3e091c189d60c36b6c219f0a5d1034534] <==
	W0916 10:33:19.419287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:33:19.419370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:19.419369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:33:19.419487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:19.419217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:33:19.419567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:19.419219       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:19.419640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:19.419337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:33:19.419280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:33:19.419721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.228031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:20.228078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.241756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:20.241792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.275701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:20.275752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.285352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:20.285403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:20.338295       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:33:20.338343       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:33:20.367158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:33:20.367201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0916 10:33:23.315206       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:33:42.078772       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.163358    1678 status_manager.go:851] "Failed to get status for pod" podUID="c02f70efafdd9ad1683640c8d3761d1d" pod="kube-system/kube-controller-manager-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.163562    1678 status_manager.go:851] "Failed to get status for pod" podUID="4f74e884ad630d68b59e0dbdb6055584" pod="kube-system/etcd-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.163748    1678 status_manager.go:851] "Failed to get status for pod" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" pod="kube-system/kube-apiserver-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.163987    1678 status_manager.go:851] "Failed to get status for pod" podUID="adb8a765a0d6f587897c42f69e87ac66" pod="kube-system/kube-scheduler-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.164146    1678 scope.go:117] "RemoveContainer" containerID="046d8febeb6af9daaa76d8c724e4b62b5c5a9a13b80bc4d578544fd7e0f2e50b"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.164272    1678 status_manager.go:851] "Failed to get status for pod" podUID="44bb424a-c279-467b-9256-64be125798f9" pod="kube-system/kindnet-6dtx8" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-6dtx8\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.164564    1678 status_manager.go:851] "Failed to get status for pod" podUID="c2a1ef0a-22f5-4b04-a7fe-30e019b2687b" pod="kube-system/kube-proxy-kshs9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kshs9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.164802    1678 status_manager.go:851] "Failed to get status for pod" podUID="a7e94614-567e-47ba-a51a-426f09198dba" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.165088    1678 status_manager.go:851] "Failed to get status for pod" podUID="44bb424a-c279-467b-9256-64be125798f9" pod="kube-system/kindnet-6dtx8" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-6dtx8\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.165321    1678 status_manager.go:851] "Failed to get status for pod" podUID="c2a1ef0a-22f5-4b04-a7fe-30e019b2687b" pod="kube-system/kube-proxy-kshs9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kshs9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166136    1678 status_manager.go:851] "Failed to get status for pod" podUID="2df1d14c-ae32-4b0d-b3fa-6cdcab40919a" pod="kube-system/coredns-7c65d6cfc9-wjzzx" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-wjzzx\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166375    1678 status_manager.go:851] "Failed to get status for pod" podUID="a7e94614-567e-47ba-a51a-426f09198dba" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166575    1678 status_manager.go:851] "Failed to get status for pod" podUID="c02f70efafdd9ad1683640c8d3761d1d" pod="kube-system/kube-controller-manager-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166740    1678 status_manager.go:851] "Failed to get status for pod" podUID="4f74e884ad630d68b59e0dbdb6055584" pod="kube-system/etcd-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.166955    1678 status_manager.go:851] "Failed to get status for pod" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" pod="kube-system/kube-apiserver-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: I0916 10:33:50.167270    1678 status_manager.go:851] "Failed to get status for pod" podUID="adb8a765a0d6f587897c42f69e87ac66" pod="kube-system/kube-scheduler-functional-546931" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-546931\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 16 10:33:50 functional-546931 kubelet[1678]: E0916 10:33:50.294098    1678 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-546931.17f5b2fabfbdf074  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-546931,UID:4f74e884ad630d68b59e0dbdb6055584,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-546931,},FirstTimestamp:2024-09-16 10:33:42.194917492 +0000 UTC m=+20.238825086,LastTimestamp:2024-09-16 10:33:42.194917492 +0000 UTC m=+20.238825086,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-546931,}"
	Sep 16 10:33:51 functional-546931 kubelet[1678]: I0916 10:33:51.205790    1678 scope.go:117] "RemoveContainer" containerID="3b28748fb574b6dfe705840291939f42f9acbf474f08cac8b4d24c04e6920fb0"
	Sep 16 10:33:51 functional-546931 kubelet[1678]: I0916 10:33:51.206016    1678 scope.go:117] "RemoveContainer" containerID="245fe0ec85c5b458982c183eaaf1a0eb8937ac0b38e254df02ec5726c325717c"
	Sep 16 10:33:51 functional-546931 kubelet[1678]: E0916 10:33:51.206166    1678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a7e94614-567e-47ba-a51a-426f09198dba)\"" pod="kube-system/storage-provisioner" podUID="a7e94614-567e-47ba-a51a-426f09198dba"
	Sep 16 10:33:52 functional-546931 kubelet[1678]: E0916 10:33:52.113293    1678 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482832113062114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:33:52 functional-546931 kubelet[1678]: E0916 10:33:52.113354    1678 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482832113062114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:02 functional-546931 kubelet[1678]: I0916 10:34:02.043015    1678 scope.go:117] "RemoveContainer" containerID="245fe0ec85c5b458982c183eaaf1a0eb8937ac0b38e254df02ec5726c325717c"
	Sep 16 10:34:02 functional-546931 kubelet[1678]: E0916 10:34:02.115224    1678 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482842114964982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:02 functional-546931 kubelet[1678]: E0916 10:34:02.115263    1678 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482842114964982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [245fe0ec85c5b458982c183eaaf1a0eb8937ac0b38e254df02ec5726c325717c] <==
	I0916 10:33:50.321558       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:33:50.323516       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b] <==
	I0916 10:34:02.111528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:02.120479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:02.120525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
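The dump above captures a control-plane restart in progress: etcd refuses connections on 127.0.0.1:2379, kubelet cannot reach the apiserver on 192.168.49.2:8441, and its readiness probe against etcd's /readyz endpoint fails until the components come back. A minimal sketch of such a probe loop, assuming only the Go standard library and the endpoint taken from the kubelet event above (illustrative, not code from the test suite):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitReady polls url until it answers 200 OK or ctx expires,
// mirroring the retry behaviour visible in the kubelet log.
func waitReady(ctx context.Context, url string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("%s never became ready: %w", url, ctx.Err())
		case <-time.After(500 * time.Millisecond):
			// brief pause before the next attempt
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// Endpoint taken from the failed readiness probe in the log above.
	if err := waitReady(ctx, "http://127.0.0.1:2381/readyz"); err != nil {
		fmt.Println(err)
	}
}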
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546931 -n functional-546931
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (423.297µs)
helpers_test.go:263: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/KubectlGetPods (2.30s)
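Every kubectl invocation in this report dies with "fork/exec /usr/local/bin/kubectl: exec format error", which the kernel returns when the file is not a valid executable for the host, most often a binary built for a different architecture. A hypothetical diagnostic sketch (the path is the one from the failure; the check itself is not part of the suite) that reads the ELF header to show what the kernel saw:

package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	path := "/usr/local/bin/kubectl" // binary the tests failed to exec
	f, err := elf.Open(path)
	if err != nil {
		// Not ELF at all: truncated download, script without shebang, etc.
		fmt.Printf("%s is not a valid ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()
	// A mismatch here (e.g. EM_AARCH64 reported on an amd64 host) would
	// reproduce exactly the "exec format error" seen throughout this report.
	fmt.Printf("binary machine: %v, host: %s/%s\n", f.Machine, runtime.GOOS, runtime.GOARCH)
}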

                                                
                                    
TestFunctional/serial/ComponentHealth (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-546931 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-546931 get po -l tier=control-plane -n kube-system -o=json: fork/exec /usr/local/bin/kubectl: exec format error (590.623µs)
functional_test.go:812: failed to get components. args "kubectl --context functional-546931 get po -l tier=control-plane -n kube-system -o=json": fork/exec /usr/local/bin/kubectl: exec format error
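Had kubectl been runnable, the test would have decoded the pod list and verified each control-plane pod's health. A minimal sketch of that check, assuming the JSON shape that `kubectl get po -o=json` prints (fed on stdin; the structs cover only the fields the check needs and are not the suite's actual implementation):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// podList models just enough of `kubectl get po -o=json` output
// to read each pod's Ready condition.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	var list podList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, pod := range list.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", pod.Metadata.Name, ready)
	}
}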
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-546931
helpers_test.go:235: (dbg) docker inspect functional-546931:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383",
	        "Created": "2024-09-16T10:33:07.830189623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:33:07.949246182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hostname",
	        "HostsPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hosts",
	        "LogPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383-json.log",
	        "Name": "/functional-546931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-546931:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-546931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-546931",
	                "Source": "/var/lib/docker/volumes/functional-546931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546931",
	                "name.minikube.sigs.k8s.io": "functional-546931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a63c1ddb1b935e3fe8e5ef70fdb0c600197ad5f66a82a23245d6065ac1a636ff",
	            "SandboxKey": "/var/run/docker/netns/a63c1ddb1b93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c19058e5aabeca0bc30434433d26203e7a45051a16cbafeae207abc5b1915f6c",
	                    "EndpointID": "d06fb1106d7a54a1e55e6e03322a29be01414e698106136216a156a15ae725c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546931",
	                        "481b09cdfdae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
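The inspect output above shows how the node publishes its ports: each container port (22, 2376, 5000, 8441, 32443) is bound to an ephemeral host port on 127.0.0.1, so the apiserver on 8441/tcp is reachable at 127.0.0.1:32781. A small sketch that recovers such a mapping from `docker inspect` JSON on stdin, using only the standard library (illustrative; the real harness resolves ports through minikube itself):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspectOutput models the fragment of `docker inspect` output
// needed to read published port bindings.
type inspectOutput []struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	var out inspectOutput // pipe in: docker inspect functional-546931
	if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil || len(out) == 0 {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, b := range out[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}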
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546931 -n functional-546931
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs -n 25: (1.357583285s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-530798 --log_dir                                                  | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-530798 --log_dir                                                  | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-530798 --log_dir                                                  | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir                                                  | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir                                                  | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-530798 --log_dir                                                  | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | /tmp/nospam-530798 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-530798                                                         | nospam-530798     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	| start   | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start   | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:34 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-546931 cache add                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-546931 cache add                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-546931 cache add                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-546931 cache add                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | minikube-local-cache-test:functional-546931                              |                   |         |         |                     |                     |
	| cache   | functional-546931 cache delete                                           | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | minikube-local-cache-test:functional-546931                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	| ssh     | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-546931                                                        | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh                                                    | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-546931 cache reload                                           | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	| ssh     | functional-546931 ssh                                                    | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-546931 kubectl --                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | --context functional-546931                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:34:18
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:34:18.404696   42870 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:34:18.404924   42870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:18.404932   42870 out.go:358] Setting ErrFile to fd 2...
	I0916 10:34:18.404935   42870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:18.405123   42870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:34:18.405735   42870 out.go:352] Setting JSON to false
	I0916 10:34:18.406619   42870 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":998,"bootTime":1726481860,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:34:18.406704   42870 start.go:139] virtualization: kvm guest
	I0916 10:34:18.409189   42870 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:34:18.410389   42870 notify.go:220] Checking for updates...
	I0916 10:34:18.410420   42870 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:34:18.411792   42870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:34:18.413367   42870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:34:18.414652   42870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:34:18.415881   42870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:34:18.417261   42870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:34:18.418976   42870 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:34:18.419048   42870 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:34:18.442998   42870 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:34:18.443083   42870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:34:18.496440   42870 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:55 SystemTime:2024-09-16 10:34:18.486043043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:34:18.496520   42870 docker.go:318] overlay module found
	I0916 10:34:18.499714   42870 out.go:177] * Using the docker driver based on existing profile
	I0916 10:34:18.501030   42870 start.go:297] selected driver: docker
	I0916 10:34:18.501042   42870 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:18.501120   42870 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:34:18.501191   42870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:34:18.552452   42870 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:55 SystemTime:2024-09-16 10:34:18.54316006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:34:18.553314   42870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:34:18.553367   42870 cni.go:84] Creating CNI manager for ""
	I0916 10:34:18.553402   42870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:34:18.553471   42870 start.go:340] cluster config:
	{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:18.555748   42870 out.go:177] * Starting "functional-546931" primary control-plane node in "functional-546931" cluster
	I0916 10:34:18.557321   42870 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:34:18.558983   42870 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:34:18.560572   42870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:34:18.560618   42870 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:34:18.560629   42870 cache.go:56] Caching tarball of preloaded images
	I0916 10:34:18.560706   42870 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:34:18.560732   42870 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:34:18.560741   42870 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:34:18.560872   42870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/config.json ...
	W0916 10:34:18.580951   42870 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:34:18.580964   42870 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:34:18.581043   42870 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:34:18.581055   42870 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:34:18.581059   42870 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:34:18.581067   42870 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:34:18.581073   42870 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:34:18.638583   42870 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:34:18.638627   42870 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:34:18.638662   42870 start.go:360] acquireMachinesLock for functional-546931: {Name:mk0ba09111db367b90aa515f201f345e63335cec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:34:18.638736   42870 start.go:364] duration metric: took 49.172µs to acquireMachinesLock for "functional-546931"
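Note on the lock step above: acquireMachinesLock serializes concurrent minikube invocations against the same machine, with the retry delay (500ms) and timeout (10m0s) visible in the lock parameters. A minimal sketch of a file-based advisory lock with the same retry/timeout shape (illustrative only; the path and helper name are hypothetical and this is not minikube's actual locking mechanism):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireFileLock polls for an exclusive lock file, retrying every delay
    // until timeout elapses. O_CREATE|O_EXCL makes creation atomic, so only
    // one process can hold the lock at a time.
    func acquireFileLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireFileLock("/tmp/functional-546931.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; safe to mutate machine state")
    }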
	I0916 10:34:18.638750   42870 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:34:18.638754   42870 fix.go:54] fixHost starting: 
	I0916 10:34:18.638962   42870 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:34:18.658511   42870 fix.go:112] recreateIfNeeded on functional-546931: state=Running err=<nil>
	W0916 10:34:18.658556   42870 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:34:18.660715   42870 out.go:177] * Updating the running docker "functional-546931" container ...
	I0916 10:34:18.661913   42870 machine.go:93] provisionDockerMachine start ...
	I0916 10:34:18.661973   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:18.679310   42870 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:18.679504   42870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:34:18.679510   42870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:34:18.808722   42870 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546931
	
	I0916 10:34:18.808747   42870 ubuntu.go:169] provisioning hostname "functional-546931"
	I0916 10:34:18.808809   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:18.829310   42870 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:18.829510   42870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:34:18.829518   42870 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546931 && echo "functional-546931" | sudo tee /etc/hostname
	I0916 10:34:18.971945   42870 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546931
	
	I0916 10:34:18.972020   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:18.989407   42870 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:18.989584   42870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:34:18.989595   42870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546931/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:34:19.125378   42870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
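The /etc/hosts snippet above is idempotent: it exits quietly if any line already ends with the hostname, rewrites an existing 127.0.1.1 entry if present, and appends one otherwise. A minimal sketch of the same decision logic as a pure Go string transform (ensureHostsEntry is a hypothetical name):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry mirrors the provisioning script: leave the file alone
    // if some line already maps to hostname, rewrite an existing 127.0.1.1
    // line, or append a new one.
    func ensureHostsEntry(hosts, hostname string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            fields := strings.Fields(l)
            if len(fields) >= 2 && fields[len(fields)-1] == hostname {
                return hosts // already present
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + hostname
    }

    func main() {
        fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "functional-546931"))
    }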
	I0916 10:34:19.125400   42870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:34:19.125420   42870 ubuntu.go:177] setting up certificates
	I0916 10:34:19.125429   42870 provision.go:84] configureAuth start
	I0916 10:34:19.125488   42870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-546931
	I0916 10:34:19.141621   42870 provision.go:143] copyHostCerts
	I0916 10:34:19.141670   42870 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:34:19.141677   42870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:34:19.141736   42870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:34:19.141813   42870 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:34:19.141816   42870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:34:19.141839   42870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:34:19.141902   42870 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:34:19.141906   42870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:34:19.141934   42870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:34:19.141992   42870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.functional-546931 san=[127.0.0.1 192.168.49.2 functional-546931 localhost minikube]
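The server certificate is issued by the local minikube CA and carries a SAN for every name the endpoint may be reached by: 127.0.0.1, the container IP 192.168.49.2, the profile name, localhost, and minikube. A self-contained sketch of issuing such a cert with crypto/x509, assuming a freshly generated CA in place of the on-disk ca.pem/ca-key.pem (error handling elided for brevity):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Stand-in CA key pair and self-signed CA cert (errors elided for brevity).
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN list from the log line above, signed by the CA key.
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-546931"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"functional-546931", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("issued server cert: %d DER bytes\n", len(srvDER))
    }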
	I0916 10:34:19.203479   42870 provision.go:177] copyRemoteCerts
	I0916 10:34:19.203533   42870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:34:19.203566   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:19.220701   42870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:34:19.313852   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:34:19.335112   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:34:19.356712   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:34:19.379158   42870 provision.go:87] duration metric: took 253.717409ms to configureAuth
	I0916 10:34:19.379184   42870 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:34:19.379387   42870 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:34:19.379492   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:19.397220   42870 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:19.397411   42870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0916 10:34:19.397422   42870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:34:24.759570   42870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:34:24.759584   42870 machine.go:96] duration metric: took 6.097664403s to provisionDockerMachine
	I0916 10:34:24.759594   42870 start.go:293] postStartSetup for "functional-546931" (driver="docker")
	I0916 10:34:24.759604   42870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:34:24.759654   42870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:34:24.759686   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:24.776508   42870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:34:24.870295   42870 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:34:24.873550   42870 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:34:24.873576   42870 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:34:24.873583   42870 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:34:24.873589   42870 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:34:24.873598   42870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:34:24.873663   42870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:34:24.873747   42870 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:34:24.873820   42870 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts -> hosts in /etc/test/nested/copy/11208
	I0916 10:34:24.873862   42870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11208
	I0916 10:34:24.882439   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:34:24.904752   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts --> /etc/test/nested/copy/11208/hosts (40 bytes)
	I0916 10:34:24.927188   42870 start.go:296] duration metric: took 167.581413ms for postStartSetup
	I0916 10:34:24.927255   42870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:34:24.927302   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:24.945076   42870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:34:25.038662   42870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
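The two df invocations above sample /var before continuing: percent used (df -h, column 5) and gigabytes available (df -BG, column 4). Roughly the same numbers can be read without shelling out; a Linux-only sketch via statfs (the percentage is an approximation of df's rounding):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var", &st); err != nil {
            panic(err)
        }
        total := st.Blocks * uint64(st.Bsize)
        avail := st.Bavail * uint64(st.Bsize)
        usedPct := 100 * (total - avail) / total // close to df's "Use%" column
        fmt.Printf("/var: %d%% used, %dG available\n", usedPct, avail>>30)
    }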
	I0916 10:34:25.043209   42870 fix.go:56] duration metric: took 6.404449811s for fixHost
	I0916 10:34:25.043225   42870 start.go:83] releasing machines lock for "functional-546931", held for 6.404483529s
	I0916 10:34:25.043275   42870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-546931
	I0916 10:34:25.060342   42870 ssh_runner.go:195] Run: cat /version.json
	I0916 10:34:25.060382   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:25.060396   42870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:34:25.060440   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:25.077568   42870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:34:25.077805   42870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:34:25.168971   42870 ssh_runner.go:195] Run: systemctl --version
	I0916 10:34:25.243335   42870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:34:25.383290   42870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:34:25.387936   42870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:34:25.396218   42870 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:34:25.396273   42870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:34:25.404555   42870 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
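Any pre-existing bridge or podman CNI config would conflict with the recommended kindnet setup, so the find/mv pass above sidelines matching files in /etc/cni/net.d by appending a .mk_disabled suffix; here none were found. A minimal sketch of the same rename pass (glob-based, ignoring find's -maxdepth/-type f refinements):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pat)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already sidelined
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", m)
            }
        }
    }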
	I0916 10:34:25.404569   42870 start.go:495] detecting cgroup driver to use...
	I0916 10:34:25.404602   42870 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:34:25.404650   42870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:34:25.415714   42870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:34:25.425647   42870 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:34:25.425683   42870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:34:25.436583   42870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:34:25.447069   42870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:34:25.560718   42870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:34:25.668184   42870 docker.go:233] disabling docker service ...
	I0916 10:34:25.668230   42870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:34:25.679527   42870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:34:25.690129   42870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:34:25.797664   42870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:34:25.900317   42870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:34:25.910850   42870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:34:25.925589   42870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:34:25.925638   42870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:25.934605   42870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:34:25.934710   42870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:25.943747   42870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:25.952959   42870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:25.962207   42870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:34:25.970755   42870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:25.979743   42870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:25.988272   42870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:25.997079   42870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:34:26.004631   42870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
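The sed series above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls; the last two commands then probe bridge netfilter and enable IP forwarding in the kernel. A sketch of the first two rewrites as multiline regexp replacements over the file contents (rewriteCrioConf is a hypothetical helper):

    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    // rewriteCrioConf applies the same whole-line substitutions as the
    // `sed -i 's|^.*pause_image = .*$|...|'` calls in the log.
    func rewriteCrioConf(conf string) string {
        conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }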
	I0916 10:34:26.012347   42870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:34:26.116499   42870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:34:32.221717   42870 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.105190756s)
	I0916 10:34:32.221735   42870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:34:32.221777   42870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:34:32.225221   42870 start.go:563] Will wait 60s for crictl version
	I0916 10:34:32.225268   42870 ssh_runner.go:195] Run: which crictl
	I0916 10:34:32.228209   42870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:34:32.263004   42870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:34:32.263060   42870 ssh_runner.go:195] Run: crio --version
	I0916 10:34:32.295976   42870 ssh_runner.go:195] Run: crio --version
	I0916 10:34:32.332387   42870 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:34:32.333775   42870 cli_runner.go:164] Run: docker network inspect functional-546931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
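The --format template above condenses docker network inspect into one JSON object: network name, driver, subnet, gateway, MTU, and the IPs of attached containers. Note the {{range}} over .Containers leaves a trailing comma after the last element, so a strict decoder needs the output normalized first. A sketch of decoding the normalized form:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // netInfo mirrors the fields produced by the inspect --format template above.
    type netInfo struct {
        Name         string   `json:"Name"`
        Driver       string   `json:"Driver"`
        Subnet       string   `json:"Subnet"`
        Gateway      string   `json:"Gateway"`
        MTU          int      `json:"MTU"`
        ContainerIPs []string `json:"ContainerIPs"`
    }

    func main() {
        raw := `{"Name":"functional-546931","Driver":"bridge","Subnet":"192.168.49.0/24","Gateway":"192.168.49.1","MTU":1500,"ContainerIPs":["192.168.49.2/24"]}`
        var n netInfo
        if err := json.Unmarshal([]byte(raw), &n); err != nil {
            panic(err)
        }
        fmt.Printf("%s/%s subnet=%s gw=%s\n", n.Name, n.Driver, n.Subnet, n.Gateway)
    }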
	I0916 10:34:32.350023   42870 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:34:32.355511   42870 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0916 10:34:32.357447   42870 kubeadm.go:883] updating cluster {Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:34:32.357563   42870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:34:32.357623   42870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:34:32.396955   42870 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:34:32.396967   42870 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:34:32.397011   42870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:34:32.428450   42870 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:34:32.428462   42870 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:34:32.428468   42870 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 crio true true} ...
	I0916 10:34:32.428553   42870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-546931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:34:32.428612   42870 ssh_runner.go:195] Run: crio config
	I0916 10:34:32.470460   42870 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0916 10:34:32.470479   42870 cni.go:84] Creating CNI manager for ""
	I0916 10:34:32.470486   42870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:34:32.470493   42870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:34:32.470510   42870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546931 NodeName:functional-546931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:34:32.470617   42870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546931"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
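The generated kubeadm.yaml above stacks four documents separated by ---: InitConfiguration (node-local bootstrap: advertise address, CRI socket, kubelet extra args), ClusterConfiguration (control-plane endpoint, cert dir, and component extraArgs including the user-supplied NamespaceAutoProvision admission plugin), KubeletConfiguration, and KubeProxyConfiguration. A small stdlib-only sketch that splits such a stream and reports each document's kind (naive line scan, illustrative only):

    package main

    import (
        "fmt"
        "strings"
    )

    // kinds extracts the `kind:` value from each document in a multi-doc
    // YAML stream like the kubeadm config above. Assumes `kind:` appears
    // at top level of every document.
    func kinds(yaml string) []string {
        var out []string
        for _, doc := range strings.Split(yaml, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind:") {
                    out = append(out, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
                    break
                }
            }
        }
        return out
    }

    func main() {
        cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
        fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
    }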
	I0916 10:34:32.470667   42870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:34:32.478723   42870 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:34:32.478780   42870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:34:32.486608   42870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0916 10:34:32.502694   42870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:34:32.518702   42870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2005 bytes)
	I0916 10:34:32.535432   42870 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:34:32.538647   42870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:34:32.641013   42870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:34:32.652407   42870 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931 for IP: 192.168.49.2
	I0916 10:34:32.652422   42870 certs.go:194] generating shared ca certs ...
	I0916 10:34:32.652442   42870 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:34:32.652602   42870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:34:32.652634   42870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:34:32.652639   42870 certs.go:256] generating profile certs ...
	I0916 10:34:32.652710   42870 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.key
	I0916 10:34:32.652746   42870 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.key.94db7109
	I0916 10:34:32.652775   42870 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.key
	I0916 10:34:32.652871   42870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:34:32.652893   42870 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:34:32.652898   42870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:34:32.652915   42870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:34:32.652942   42870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:34:32.652958   42870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:34:32.652989   42870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:34:32.653566   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:34:32.675925   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:34:32.698063   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:34:32.719673   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:34:32.741789   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:34:32.763369   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:34:32.785059   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:34:32.806616   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:34:32.828167   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:34:32.850359   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:34:32.872505   42870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:34:32.894606   42870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:34:32.910602   42870 ssh_runner.go:195] Run: openssl version
	I0916 10:34:32.915521   42870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:34:32.924270   42870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:34:32.927633   42870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:34:32.927675   42870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:34:32.934058   42870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:34:32.942315   42870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:34:32.951116   42870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:34:32.954248   42870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:34:32.954292   42870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:34:32.960535   42870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:34:32.969084   42870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:34:32.977846   42870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:32.981066   42870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:32.981126   42870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:32.987356   42870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:34:32.995334   42870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:34:32.998505   42870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:34:33.004611   42870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:34:33.010768   42870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:34:33.016693   42870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:34:33.022696   42870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:34:33.028782   42870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
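Each openssl x509 -checkend 86400 call above verifies the certificate remains valid for at least another 24 hours (86400 seconds) before the existing control plane is reused. The equivalent check in Go (checkend is a hypothetical helper mirroring the openssl flag):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the PEM certificate at path expires within d,
    // matching `openssl x509 -checkend <seconds>`.
    func checkend(path string, d time.Duration) (expiringSoon bool, err error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Until(cert.NotAfter) < d, nil
    }

    func main() {
        soon, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }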
	I0916 10:34:33.034703   42870 kubeadm.go:392] StartCluster: {Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:33.034794   42870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:34:33.034862   42870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:34:33.067945   42870 cri.go:89] found id: "a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b"
	I0916 10:34:33.067961   42870 cri.go:89] found id: "03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a"
	I0916 10:34:33.067965   42870 cri.go:89] found id: "500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af"
	I0916 10:34:33.067968   42870 cri.go:89] found id: "0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75"
	I0916 10:34:33.067971   42870 cri.go:89] found id: "1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54"
	I0916 10:34:33.067975   42870 cri.go:89] found id: "8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b"
	I0916 10:34:33.067978   42870 cri.go:89] found id: "e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e"
	I0916 10:34:33.067981   42870 cri.go:89] found id: "ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b"
	I0916 10:34:33.067983   42870 cri.go:89] found id: ""
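The eight container IDs above (plus a terminating empty entry) come from splitting the output of the crictl invocation at the top of this block on newlines. A minimal sketch of running the same command and printing the non-empty IDs (assumes crictl and passwordless sudo on the host):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command as the log: list all kube-system container IDs.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if id != "" {
                fmt.Println("found id:", id)
            }
        }
    }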
	I0916 10:34:33.068023   42870 ssh_runner.go:195] Run: sudo runc list -f json
	I0916 10:34:33.088803   42870 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a/userdata","rootfs":"/var/lib/containers/storage/overlay/b9892146d51788870554f420f664f8985cb1b78a5fc6935f289dfff2e0866ee2/merged","created":"2024-09-16T10:33:50.505526777Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:50.326425808Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"adb8a765a0d6f587897c42f69e87ac66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546931_adb8a765a0d6f587897c42f69e87ac66/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",
\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b9892146d51788870554f420f664f8985cb1b78a5fc6935f289dfff2e0866ee2/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-546931_kube-system_adb8a765a0d6f587897c42f69e87ac66_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f41f93397a4f0c264e393fcd137e74e25b6724eae504ae8f63019cd6de5479ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546931_kube-system_adb8a765a0d6f587897c42f69e87ac66_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_rel
abel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/adb8a765a0d6f587897c42f69e87ac66/containers/kube-scheduler/7438de62\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.hash":"adb8a765a0d6f587897c42f69e87ac66","kubernetes.io/config.seen":"2024-09-16T10:33:16.360795477Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd
107cd75/userdata","rootfs":"/var/lib/containers/storage/overlay/e8f3d20bc7321e60c36643dc26b25f1d9fb825793c929b5bcd7d95a08c7f8677/merged","created":"2024-09-16T10:33:50.503722523Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:50.317781359Z","i
o.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eb02afa85fe4b42d87b2f90fa03a9ee4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-546931_eb02afa85fe4b42d87b2f90fa03a9ee4/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e8f3d20bc7321e60c36643dc26b25f1d9fb825793c929b5bcd7d95a08c7f8677/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_1","io.kub
ernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-546931_kube-system_eb02afa85fe4b42d87b2f90fa03a9ee4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/containers/kube-apiserver/d9194c2f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa
03a9ee4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.h
ash":"eb02afa85fe4b42d87b2f90fa03a9ee4","kubernetes.io/config.seen":"2024-09-16T10:33:16.360791837Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54/userdata","rootfs":"/var/lib/containers/storage/overlay/8240af8c4afb7645fb91c05a51d7d6175766069cba1cf467c69b238ce9dbf42b/merged","created":"2024-09-16T10:33:50.507535651Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.
terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:50.31303573Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4f74e884ad630d68b59e0dbdb6055584\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546931_4f74e884ad630d68b59e0dbdb6055584/etcd/1.log","io.kubernetes.cri-o.Meta
data":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8240af8c4afb7645fb91c05a51d7d6175766069cba1cf467c69b238ce9dbf42b/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5b3fe285a24162add56b997fa0365bd6ab5b37297ca3c927fdbd5f09073a5b2a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546931_kube-system_4f74e884ad630d68b59e0dbdb6055584_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel
\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4f74e884ad630d68b59e0dbdb6055584/containers/etcd/56d450b7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4f74e884ad630d68b59e0dbdb6055584","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"4f74e884ad630d68b59e0dbdb6055584","kubernetes.io/config.seen":"2024-09-16T10:33:16.360785708Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":
"500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af/userdata","rootfs":"/var/lib/containers/storage/overlay/4a03376584e441af107c3c9be69b8b785710e2cf737d3d6cb3c4ac6636bd2293/merged","created":"2024-09-16T10:33:50.500419418Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.p
orts\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:50.321256809Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
"io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-wjzzx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-wjzzx_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4a03376584e441af107c3c9be69b8b785710e2cf737d3d6cb3c4ac6636bd2293/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a8423288f91be1a84a4da521d6ae34bd864cd162a94fbed9d42a73771704123e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a8423288f91be1a84a4da521d6ae34bd864cd162a94fbed9d42a73771704123e","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7
c65d6cfc9-wjzzx_kube-system_2df1d14c-ae32-4b0d-b3fa-6cdcab40919a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/containers/coredns/898637a0\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2df1d14c-ae32-4b0d-b3fa-6cdcab40919a/vo
lumes/kubernetes.io~projected/kube-api-access-6nbq8\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-wjzzx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2df1d14c-ae32-4b0d-b3fa-6cdcab40919a","kubernetes.io/config.seen":"2024-09-16T10:33:38.232398573Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b/userdata","rootfs":"/var/lib/containers/storage/overlay/e6a889e3f78d68a09ee4e0de13beb146c529c24c70fc7e46d2c67bb12ac11525/merged","created":"2024-09-16T10:33:50.403114963Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container
.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:50.221509038Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"k
ube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-546931\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c02f70efafdd9ad1683640c8d3761d1d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-546931_c02f70efafdd9ad1683640c8d3761d1d/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e6a889e3f78d68a09ee4e0de13beb146c529c24c70fc7e46d2c67bb12ac11525/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/878410a4a3694fdf2132194e1285396dab571b39a68ea3dbdc0049350911800d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"878410a4a3694fdf2132194e1285396dab571b39a68ea3dbdc0049350911800d","io.kubernet
es.cri-o.SandboxName":"k8s_kube-controller-manager-functional-546931_kube-system_c02f70efafdd9ad1683640c8d3761d1d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/containers/kube-controller-manager/42d1af57\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c02f70efafdd9ad1683640c8d3761d1d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kube
rnetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-546931","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes
.pod.uid":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.hash":"c02f70efafdd9ad1683640c8d3761d1d","kubernetes.io/config.seen":"2024-09-16T10:33:16.360793733Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b/userdata","rootfs":"/var/lib/containers/storage/overlay/2ef8a64a5dc923c464e4178a52da4363133a12d896b2a2bc34be28bf1942ad23/merged","created":"2024-09-16T10:34:02.082928415Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6
c6bf961\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:34:02.058500813Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7e94614-567e-47ba-a51a-426f09198dba\"}","io.kubernetes.cri-o.LogPath":"/var/l
og/pods/kube-system_storage-provisioner_a7e94614-567e-47ba-a51a-426f09198dba/storage-provisioner/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2ef8a64a5dc923c464e4178a52da4363133a12d896b2a2bc34be28bf1942ad23/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_a7e94614-567e-47ba-a51a-426f09198dba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"containe
r_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/containers/storage-provisioner/4e3a1ddb\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a7e94614-567e-47ba-a51a-426f09198dba/volumes/kubernetes.io~projected/kube-api-access-2sn2d\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7e94614-567e-47ba-a51a-426f09198dba","kubectl.kubernetes.io/last-applied-con
figuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-09-16T10:33:38.233440095Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab6
7c232b8b/userdata","rootfs":"/var/lib/containers/storage/overlay/acb3fd8e7eb3f15324b676c021f6c5a9201bff2e125f656d428e60ede0f53c9f/merged","created":"2024-09-16T10:33:50.313792846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:50.205505937Z","io.k
ubernetes.cri-o.Image":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-kshs9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-kshs9_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/acb3fd8e7eb3f15324b676c021f6c5a9201bff2e125f656d428e60ede0f53c9f/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-
containers/f14f9778290afbd7383f2dd12ee1f50b74d62f40bf11ae42d2fd8c4a441931e1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f14f9778290afbd7383f2dd12ee1f50b74d62f40bf11ae42d2fd8c4a441931e1","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-kshs9_kube-system_c2a1ef0a-22f5-4b04-a7fe-30e019b2687b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kube
let/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/containers/kube-proxy/93b8406d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b/volumes/kubernetes.io~projected/kube-api-access-j6b95\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-kshs9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b","kubernetes.io/config.seen":"2024-09-16T10:33:27.024180818Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e2626d8943ee8beaea49f2b23d15e1067da25a18b4a
44debc92d42920d43e65e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e/userdata","rootfs":"/var/lib/containers/storage/overlay/44cc104fd161cfddcffde54caf3e378a72f6ebefb2b196a0e4fbe7b4dee1d5c9/merged","created":"2024-09-16T10:33:50.31437313Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e80daca3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e80daca3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e2626d8943ee8beaea49f2b
23d15e1067da25a18b4a44debc92d42920d43e65e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T10:33:50.216131926Z","io.kubernetes.cri-o.Image":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri-o.ImageRef":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-6dtx8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"44bb424a-c279-467b-9256-64be125798f9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-6dtx8_44bb424a-c279-467b-9256-64be125798f9/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/44cc104fd161cfddcffde54caf3e378a72f6ebefb2b196a0e4fbe7b4dee1d5c9/merged","io.kuberne
tes.cri-o.Name":"k8s_kindnet-cni_kindnet-6dtx8_kube-system_44bb424a-c279-467b-9256-64be125798f9_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-6dtx8_kube-system_44bb424a-c279-467b-9256-64be125798f9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-6
4be125798f9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/containers/kindnet-cni/409f6801\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/44bb424a-c279-467b-9256-64be125798f9/volumes/kubernetes.io~projected/kube-api-access-pvmbd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-6dtx8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"44bb424a-c279-467b-9256-64be125798f9","kubernetes.io/config.seen":"2024-09-16T10:33:27.017005789Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0916 10:34:33.089160   42870 cri.go:126] list returned 8 containers
	I0916 10:34:33.089168   42870 cri.go:129] container: {ID:03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a Status:stopped}
	I0916 10:34:33.089180   42870 cri.go:135] skipping {03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a stopped}: state = "stopped", want "paused"
	I0916 10:34:33.089186   42870 cri.go:129] container: {ID:0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75 Status:stopped}
	I0916 10:34:33.089190   42870 cri.go:135] skipping {0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75 stopped}: state = "stopped", want "paused"
	I0916 10:34:33.089192   42870 cri.go:129] container: {ID:1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54 Status:stopped}
	I0916 10:34:33.089195   42870 cri.go:135] skipping {1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54 stopped}: state = "stopped", want "paused"
	I0916 10:34:33.089197   42870 cri.go:129] container: {ID:500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af Status:stopped}
	I0916 10:34:33.089200   42870 cri.go:135] skipping {500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af stopped}: state = "stopped", want "paused"
	I0916 10:34:33.089202   42870 cri.go:129] container: {ID:8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b Status:stopped}
	I0916 10:34:33.089204   42870 cri.go:135] skipping {8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b stopped}: state = "stopped", want "paused"
	I0916 10:34:33.089206   42870 cri.go:129] container: {ID:a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b Status:stopped}
	I0916 10:34:33.089209   42870 cri.go:135] skipping {a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b stopped}: state = "stopped", want "paused"
	I0916 10:34:33.089211   42870 cri.go:129] container: {ID:ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b Status:stopped}
	I0916 10:34:33.089214   42870 cri.go:135] skipping {ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b stopped}: state = "stopped", want "paused"
	I0916 10:34:33.089215   42870 cri.go:129] container: {ID:e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e Status:stopped}
	I0916 10:34:33.089218   42870 cri.go:135] skipping {e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e stopped}: state = "stopped", want "paused"
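The cri.go:129/135 lines above show minikube listing every CRI container and discarding any whose state is not the one it wants ("paused" here, since it is checking whether anything needs unpausing before a restart). A minimal sketch of that filter, with hypothetical types that are not minikube's actual API:

```go
// Sketch of the state filter implied by cri.go:129/135 above.
// The container type and function names are illustrative only.
package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers whose Status matches want,
// logging a skip message for the rest, mirroring the log lines.
func filterByState(all []container, want string) []container {
	var kept []container
	for _, c := range all {
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	cs := []container{{ID: "03c9ff61deb5", Status: "stopped"}}
	paused := filterByState(cs, "paused")
	fmt.Printf("kept %d paused containers\n", len(paused))
}
```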
	I0916 10:34:33.089265   42870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:34:33.098564   42870 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:34:33.098585   42870 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:34:33.098634   42870 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:34:33.106374   42870 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:34:33.106831   42870 kubeconfig.go:125] found "functional-546931" server: "https://192.168.49.2:8441"
	I0916 10:34:33.107909   42870 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:34:33.115881   42870 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-09-16 10:33:12.281758432 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-09-16 10:34:32.531665406 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
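The drift check at kubeadm.go:640 works by diffing the previously rendered kubeadm config against the freshly rendered one; the unified diff above shows the only change is the admission-plugins flag, which is enough to trigger a cluster reconfigure. A hedged sketch of that check, relying on `diff`'s documented exit codes (0 = identical, 1 = differ, 2 = error); paths are copied from the log, everything else is illustrative:

```go
// configDrifted reports whether two kubeadm configs differ,
// returning the unified diff when they do.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit 2 or exec failure: real error
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}
```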
	I0916 10:34:33.115895   42870 kubeadm.go:1160] stopping kube-system containers ...
	I0916 10:34:33.115950   42870 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0916 10:34:33.115995   42870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:34:33.150182   42870 cri.go:89] found id: "a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b"
	I0916 10:34:33.150198   42870 cri.go:89] found id: "03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a"
	I0916 10:34:33.150202   42870 cri.go:89] found id: "500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af"
	I0916 10:34:33.150205   42870 cri.go:89] found id: "0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75"
	I0916 10:34:33.150208   42870 cri.go:89] found id: "1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54"
	I0916 10:34:33.150212   42870 cri.go:89] found id: "8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b"
	I0916 10:34:33.150215   42870 cri.go:89] found id: "e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e"
	I0916 10:34:33.150218   42870 cri.go:89] found id: "ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b"
	I0916 10:34:33.150220   42870 cri.go:89] found id: ""
	I0916 10:34:33.150226   42870 cri.go:252] Stopping containers: [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b 03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a 500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af 0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75 1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54 8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b]
	I0916 10:34:33.150273   42870 ssh_runner.go:195] Run: which crictl
	I0916 10:34:33.153704   42870 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b 03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a 500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af 0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75 1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54 8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b
	I0916 10:34:33.205781   42870 ssh_runner.go:195] Run: sudo systemctl stop kubelet
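Before reconfiguring, the log stops every kube-system container via `crictl stop --timeout=10` and then stops the kubelet so it cannot restart them. A small exec wrapper sketching that two-step sequence under the assumption that `crictl` and `systemctl` are on PATH; the container ID in main is a placeholder:

```go
// stopKubeSystem stops the given kube-system containers with a 10s
// grace period, then stops the kubelet, mirroring the log above.
package main

import (
	"fmt"
	"os/exec"
)

func stopKubeSystem(ids []string) error {
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("crictl stop: %v: %s", err, out)
	}
	if out, err := exec.Command("sudo", "systemctl", "stop", "kubelet").CombinedOutput(); err != nil {
		return fmt.Errorf("stop kubelet: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := stopKubeSystem([]string{"a51e8bf1740c"}); err != nil {
		panic(err)
	}
}
```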
	I0916 10:34:33.324093   42870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:34:33.332677   42870 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep 16 10:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 16 10:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 16 10:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 16 10:33 /etc/kubernetes/scheduler.conf
	
	I0916 10:34:33.332740   42870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0916 10:34:33.340615   42870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0916 10:34:33.348585   42870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0916 10:34:33.356151   42870 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:34:33.356201   42870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:34:33.363688   42870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0916 10:34:33.371771   42870 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:34:33.371821   42870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
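The kubeadm.go:163 lines above prune stale kubeconfigs: each file is grepped for the expected control-plane endpoint, and files that do not mention it are deleted so the later `kubeadm init phase kubeconfig all` regenerates them. A sketch of that loop, again using grep's exit-1-means-no-match convention; endpoint and paths come from the log:

```go
// pruneStaleKubeconfigs removes kubeconfig files that do not reference
// the expected control-plane URL, so kubeadm can rewrite them.
package main

import (
	"errors"
	"os/exec"
)

func pruneStaleKubeconfigs(endpoint string, files []string) error {
	for _, f := range files {
		err := exec.Command("sudo", "grep", endpoint, f).Run()
		if err == nil {
			continue // endpoint present: keep the file
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			// no match: remove so `kubeadm init phase kubeconfig` regenerates it
			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
				return rmErr
			}
			continue
		}
		return err // grep itself failed
	}
	return nil
}

func main() {
	_ = pruneStaleKubeconfigs("https://control-plane.minikube.internal:8441",
		[]string{"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"})
}
```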
	I0916 10:34:33.379738   42870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:34:33.388218   42870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:34:33.429441   42870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:34:34.441737   42870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012268972s)
	I0916 10:34:34.441765   42870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:34:34.598952   42870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:34:34.646037   42870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
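The restart path then regenerates cluster state by running five `kubeadm init` phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each through bash with the pinned binaries directory prepended to PATH, exactly as the Run lines above show. A compact sketch of that loop:

```go
// runPhases executes the kubeadm init phases in the order the log runs
// them, against the regenerated config.
package main

import (
	"fmt"
	"os/exec"
)

func runPhases() error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmdline := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q: %v: %s", p, err, out)
		}
	}
	return nil
}

func main() { fmt.Println(runPhases()) }
```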
	I0916 10:34:34.797303   42870 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:34:34.797393   42870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:34:35.297447   42870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:34:35.797488   42870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:34:35.809255   42870 api_server.go:72] duration metric: took 1.011947889s to wait for apiserver process to appear ...
	I0916 10:34:35.809273   42870 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:34:35.809297   42870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:34:37.815541   42870 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 10:34:37.815558   42870 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 10:34:37.815571   42870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:34:37.906890   42870 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:34:37.906912   42870 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:34:38.309386   42870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:34:38.313362   42870 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:34:38.313379   42870 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:34:38.809419   42870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:34:38.813496   42870 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:34:38.813543   42870 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:34:39.310058   42870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:34:39.313847   42870 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0916 10:34:39.331358   42870 api_server.go:141] control plane version: v1.31.1
	I0916 10:34:39.331380   42870 api_server.go:131] duration metric: took 3.522100311s to wait for apiserver health ...
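The api_server.go lines above poll /healthz roughly every 500ms, treating the initial 403 (anonymous user before RBAC bootstrap) and the 500s (poststarthooks still pending) as "not ready yet" and stopping only on a 200 "ok". A hedged sketch of that wait; TLS verification is skipped here only because this sketch carries no CA bundle, whereas minikube itself uses the cluster certificates:

```go
// waitForHealthz polls the apiserver /healthz endpoint until it
// returns HTTP 200 or the timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
			// 403 (anonymous) and 500 (hooks pending) both mean "retry"
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.49.2:8441/healthz", 4*time.Minute))
}
```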
	I0916 10:34:39.331438   42870 cni.go:84] Creating CNI manager for ""
	I0916 10:34:39.331444   42870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:34:39.333797   42870 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:34:39.335410   42870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:34:39.339796   42870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:34:39.339805   42870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:34:39.357846   42870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
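Once healthy, the CNI manifest is applied with the pinned kubectl binary and the in-VM kubeconfig, as in the ssh_runner line above. A small exec wrapper sketching that call; the paths are copied from the log:

```go
// applyCNI applies the kindnet manifest the same way the log does.
package main

import (
	"fmt"
	"os/exec"
)

func applyCNI() error {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() { fmt.Println(applyCNI()) }
```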
	I0916 10:34:39.684062   42870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:34:39.691278   42870 system_pods.go:59] 8 kube-system pods found
	I0916 10:34:39.691297   42870 system_pods.go:61] "coredns-7c65d6cfc9-wjzzx" [2df1d14c-ae32-4b0d-b3fa-6cdcab40919a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:34:39.691303   42870 system_pods.go:61] "etcd-functional-546931" [7fe96e5a-6112-4e96-981b-b15be906fa34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:34:39.691309   42870 system_pods.go:61] "kindnet-6dtx8" [44bb424a-c279-467b-9256-64be125798f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0916 10:34:39.691313   42870 system_pods.go:61] "kube-apiserver-functional-546931" [3565c428-ff63-4605-844c-8cac37e347ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:34:39.691318   42870 system_pods.go:61] "kube-controller-manager-functional-546931" [49789d64-6fd1-441c-b9e0-470a0832d127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:34:39.691341   42870 system_pods.go:61] "kube-proxy-kshs9" [c2a1ef0a-22f5-4b04-a7fe-30e019b2687b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0916 10:34:39.691346   42870 system_pods.go:61] "kube-scheduler-functional-546931" [40d727b8-b05b-40b1-9837-87741459ef16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 10:34:39.691351   42870 system_pods.go:61] "storage-provisioner" [a7e94614-567e-47ba-a51a-426f09198dba] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:34:39.691355   42870 system_pods.go:74] duration metric: took 7.282103ms to wait for pod list to return data ...
	I0916 10:34:39.691362   42870 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:34:39.695383   42870 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:34:39.695401   42870 node_conditions.go:123] node cpu capacity is 8
	I0916 10:34:39.695412   42870 node_conditions.go:105] duration metric: took 4.046043ms to run NodePressure ...
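The NodePressure verification at node_conditions.go reads node capacity (ephemeral storage, CPU) and confirms no pressure conditions are set. A client-go sketch of an equivalent check, assuming a kubeconfig at a placeholder path:

```go
// Lists nodes, prints storage/CPU capacity, and flags any non-Ready
// condition (MemoryPressure, DiskPressure, PIDPressure) that is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}
```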
	I0916 10:34:39.695430   42870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:34:39.955526   42870 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0916 10:34:39.997085   42870 kubeadm.go:739] kubelet initialised
	I0916 10:34:39.997097   42870 kubeadm.go:740] duration metric: took 41.553814ms waiting for restarted kubelet to initialise ...
	I0916 10:34:39.997104   42870 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:34:40.002514   42870 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:40.507549   42870 pod_ready.go:93] pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:40.507558   42870 pod_ready.go:82] duration metric: took 505.02774ms for pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:40.507566   42870 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:42.513204   42870 pod_ready.go:103] pod "etcd-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:34:44.513295   42870 pod_ready.go:103] pod "etcd-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:34:46.513743   42870 pod_ready.go:103] pod "etcd-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:34:49.012956   42870 pod_ready.go:103] pod "etcd-functional-546931" in "kube-system" namespace has status "Ready":"False"
	I0916 10:34:50.013692   42870 pod_ready.go:93] pod "etcd-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:50.013703   42870 pod_ready.go:82] duration metric: took 9.506132402s for pod "etcd-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.013713   42870 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.017796   42870 pod_ready.go:93] pod "kube-apiserver-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:50.017805   42870 pod_ready.go:82] duration metric: took 4.087463ms for pod "kube-apiserver-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.017814   42870 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.021475   42870 pod_ready.go:93] pod "kube-controller-manager-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:50.021482   42870 pod_ready.go:82] duration metric: took 3.663253ms for pod "kube-controller-manager-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.021490   42870 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kshs9" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.024829   42870 pod_ready.go:93] pod "kube-proxy-kshs9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:50.024837   42870 pod_ready.go:82] duration metric: took 3.342518ms for pod "kube-proxy-kshs9" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.024843   42870 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.028066   42870 pod_ready.go:93] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:50.028072   42870 pod_ready.go:82] duration metric: took 3.225055ms for pod "kube-scheduler-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.028094   42870 pod_ready.go:39] duration metric: took 10.030980935s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:34:50.028108   42870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:34:50.035473   42870 ops.go:34] apiserver oom_adj: -16
	I0916 10:34:50.035483   42870 kubeadm.go:597] duration metric: took 16.936893834s to restartPrimaryControlPlane
	I0916 10:34:50.035490   42870 kubeadm.go:394] duration metric: took 17.000797005s to StartCluster
	I0916 10:34:50.035505   42870 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:34:50.035563   42870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:34:50.036119   42870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:34:50.036319   42870 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:34:50.036396   42870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:34:50.036485   42870 addons.go:69] Setting storage-provisioner=true in profile "functional-546931"
	I0916 10:34:50.036502   42870 addons.go:234] Setting addon storage-provisioner=true in "functional-546931"
	W0916 10:34:50.036509   42870 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:34:50.036512   42870 addons.go:69] Setting default-storageclass=true in profile "functional-546931"
	I0916 10:34:50.036528   42870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-546931"
	I0916 10:34:50.036539   42870 host.go:66] Checking if "functional-546931" exists ...
	I0916 10:34:50.036561   42870 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:34:50.036771   42870 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:34:50.037007   42870 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:34:50.038044   42870 out.go:177] * Verifying Kubernetes components...
	I0916 10:34:50.039830   42870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:34:50.056289   42870 addons.go:234] Setting addon default-storageclass=true in "functional-546931"
	W0916 10:34:50.056305   42870 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:34:50.056336   42870 host.go:66] Checking if "functional-546931" exists ...
	I0916 10:34:50.056825   42870 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
	I0916 10:34:50.060433   42870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:34:50.061861   42870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:34:50.061871   42870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:34:50.061921   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:50.074453   42870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:34:50.074469   42870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:34:50.074528   42870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
	I0916 10:34:50.078569   42870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:34:50.099380   42870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
	I0916 10:34:50.165905   42870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:34:50.178003   42870 node_ready.go:35] waiting up to 6m0s for node "functional-546931" to be "Ready" ...
	I0916 10:34:50.190984   42870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:34:50.202670   42870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:34:50.211957   42870 node_ready.go:49] node "functional-546931" has status "Ready":"True"
	I0916 10:34:50.211971   42870 node_ready.go:38] duration metric: took 33.947522ms for node "functional-546931" to be "Ready" ...
	I0916 10:34:50.211981   42870 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:34:50.414113   42870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.710132   42870 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:34:50.711569   42870 addons.go:510] duration metric: took 675.179489ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 10:34:50.811710   42870 pod_ready.go:93] pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:50.811721   42870 pod_ready.go:82] duration metric: took 397.594343ms for pod "coredns-7c65d6cfc9-wjzzx" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:50.811730   42870 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:51.212104   42870 pod_ready.go:93] pod "etcd-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:51.212114   42870 pod_ready.go:82] duration metric: took 400.380225ms for pod "etcd-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:51.212125   42870 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:51.612237   42870 pod_ready.go:93] pod "kube-apiserver-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:51.612248   42870 pod_ready.go:82] duration metric: took 400.117775ms for pod "kube-apiserver-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:51.612256   42870 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:52.012310   42870 pod_ready.go:93] pod "kube-controller-manager-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:52.012323   42870 pod_ready.go:82] duration metric: took 400.060204ms for pod "kube-controller-manager-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:52.012334   42870 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kshs9" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:52.412376   42870 pod_ready.go:93] pod "kube-proxy-kshs9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:52.412389   42870 pod_ready.go:82] duration metric: took 400.047969ms for pod "kube-proxy-kshs9" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:52.412400   42870 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:52.812439   42870 pod_ready.go:93] pod "kube-scheduler-functional-546931" in "kube-system" namespace has status "Ready":"True"
	I0916 10:34:52.812451   42870 pod_ready.go:82] duration metric: took 400.044534ms for pod "kube-scheduler-functional-546931" in "kube-system" namespace to be "Ready" ...
	I0916 10:34:52.812459   42870 pod_ready.go:39] duration metric: took 2.600468839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:34:52.812473   42870 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:34:52.812521   42870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:34:52.823399   42870 api_server.go:72] duration metric: took 2.787056833s to wait for apiserver process to appear ...
	I0916 10:34:52.823416   42870 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:34:52.823440   42870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:34:52.828063   42870 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0916 10:34:52.828895   42870 api_server.go:141] control plane version: v1.31.1
	I0916 10:34:52.828907   42870 api_server.go:131] duration metric: took 5.485517ms to wait for apiserver health ...
	I0916 10:34:52.828915   42870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:34:53.014358   42870 system_pods.go:59] 8 kube-system pods found
	I0916 10:34:53.014375   42870 system_pods.go:61] "coredns-7c65d6cfc9-wjzzx" [2df1d14c-ae32-4b0d-b3fa-6cdcab40919a] Running
	I0916 10:34:53.014379   42870 system_pods.go:61] "etcd-functional-546931" [7fe96e5a-6112-4e96-981b-b15be906fa34] Running
	I0916 10:34:53.014381   42870 system_pods.go:61] "kindnet-6dtx8" [44bb424a-c279-467b-9256-64be125798f9] Running
	I0916 10:34:53.014384   42870 system_pods.go:61] "kube-apiserver-functional-546931" [3565c428-ff63-4605-844c-8cac37e347ad] Running
	I0916 10:34:53.014386   42870 system_pods.go:61] "kube-controller-manager-functional-546931" [49789d64-6fd1-441c-b9e0-470a0832d127] Running
	I0916 10:34:53.014389   42870 system_pods.go:61] "kube-proxy-kshs9" [c2a1ef0a-22f5-4b04-a7fe-30e019b2687b] Running
	I0916 10:34:53.014391   42870 system_pods.go:61] "kube-scheduler-functional-546931" [40d727b8-b05b-40b1-9837-87741459ef16] Running
	I0916 10:34:53.014393   42870 system_pods.go:61] "storage-provisioner" [a7e94614-567e-47ba-a51a-426f09198dba] Running
	I0916 10:34:53.014397   42870 system_pods.go:74] duration metric: took 185.47827ms to wait for pod list to return data ...
	I0916 10:34:53.014403   42870 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:34:53.211641   42870 default_sa.go:45] found service account: "default"
	I0916 10:34:53.211656   42870 default_sa.go:55] duration metric: took 197.248512ms for default service account to be created ...
	I0916 10:34:53.211663   42870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:34:53.414204   42870 system_pods.go:86] 8 kube-system pods found
	I0916 10:34:53.414218   42870 system_pods.go:89] "coredns-7c65d6cfc9-wjzzx" [2df1d14c-ae32-4b0d-b3fa-6cdcab40919a] Running
	I0916 10:34:53.414222   42870 system_pods.go:89] "etcd-functional-546931" [7fe96e5a-6112-4e96-981b-b15be906fa34] Running
	I0916 10:34:53.414225   42870 system_pods.go:89] "kindnet-6dtx8" [44bb424a-c279-467b-9256-64be125798f9] Running
	I0916 10:34:53.414227   42870 system_pods.go:89] "kube-apiserver-functional-546931" [3565c428-ff63-4605-844c-8cac37e347ad] Running
	I0916 10:34:53.414230   42870 system_pods.go:89] "kube-controller-manager-functional-546931" [49789d64-6fd1-441c-b9e0-470a0832d127] Running
	I0916 10:34:53.414233   42870 system_pods.go:89] "kube-proxy-kshs9" [c2a1ef0a-22f5-4b04-a7fe-30e019b2687b] Running
	I0916 10:34:53.414235   42870 system_pods.go:89] "kube-scheduler-functional-546931" [40d727b8-b05b-40b1-9837-87741459ef16] Running
	I0916 10:34:53.414237   42870 system_pods.go:89] "storage-provisioner" [a7e94614-567e-47ba-a51a-426f09198dba] Running
	I0916 10:34:53.414242   42870 system_pods.go:126] duration metric: took 202.575253ms to wait for k8s-apps to be running ...
	I0916 10:34:53.414247   42870 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:34:53.414289   42870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:34:53.424942   42870 system_svc.go:56] duration metric: took 10.680719ms WaitForService to wait for kubelet
	I0916 10:34:53.424960   42870 kubeadm.go:582] duration metric: took 3.388622381s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:34:53.424974   42870 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:34:53.612263   42870 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:34:53.612275   42870 node_conditions.go:123] node cpu capacity is 8
	I0916 10:34:53.612285   42870 node_conditions.go:105] duration metric: took 187.30814ms to run NodePressure ...
	I0916 10:34:53.612295   42870 start.go:241] waiting for startup goroutines ...
	I0916 10:34:53.612301   42870 start.go:246] waiting for cluster config update ...
	I0916 10:34:53.612309   42870 start.go:255] writing updated cluster config ...
	I0916 10:34:53.612576   42870 ssh_runner.go:195] Run: rm -f paused
	I0916 10:34:53.618531   42870 out.go:177] * Done! kubectl is now configured to use "functional-546931" cluster and "default" namespace by default
	E0916 10:34:53.619819   42870 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
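
The "exec format error" on /usr/local/bin/kubectl is the same failure mode behind every kubectl-dependent test in this run: the kernel refuses to execute the binary, which almost always means it was built for a different architecture than this amd64 host (or the file is truncated or corrupt, e.g. an HTML error page saved during download). A minimal check, assuming shell access to the CI host:

    file /usr/local/bin/kubectl   # expect "ELF 64-bit LSB executable, x86-64" on this host
    uname -m                      # host architecture; x86_64 per the node info below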
	
	
	==> CRI-O <==
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.024292333Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/29022fbe9d0244ed089617a24c2f8cbe4f08a8beae5ae03f6047b0231cfb03d8/merged/etc/group: no such file or directory"
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.106295532Z" level=info msg="Created container 1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397: kube-system/storage-provisioner/storage-provisioner" id=6e5d3d77-e438-42c6-b76d-99f2ac780b09 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.106950538Z" level=info msg="Starting container: 1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397" id=078be9d7-4208-423d-98dc-9398bc0b12fe name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.108067540Z" level=info msg="Created container 8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d: kube-system/kube-proxy-kshs9/kube-proxy" id=a8be701b-1f8f-4caf-a7ff-4d66d4c2a483 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.108567185Z" level=info msg="Created container 79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50: kube-system/kindnet-6dtx8/kindnet-cni" id=ba44e7ce-e0e6-441d-9bc4-844470033ed3 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.108579843Z" level=info msg="Starting container: 8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d" id=36f9d7a1-12f8-433c-845f-753b4cf92121 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.109002970Z" level=info msg="Starting container: 79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50" id=d735edd1-6ff5-4453-8cfa-4e2448d62728 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.114629954Z" level=info msg="Started container" PID=6309 containerID=1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397 description=kube-system/storage-provisioner/storage-provisioner id=078be9d7-4208-423d-98dc-9398bc0b12fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=2133c690032da3c11e6629bf0f7f0d7b281b7b9a9f111f7eff35d647c3aa1a6b
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.118134977Z" level=info msg="Started container" PID=6334 containerID=79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50 description=kube-system/kindnet-6dtx8/kindnet-cni id=d735edd1-6ff5-4453-8cfa-4e2448d62728 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4aa3f5aefc537ef06f6e109b8262f6eb8c329531691253bf08b7a9b89d8f9c49
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.118623995Z" level=info msg="Started container" PID=6313 containerID=8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d description=kube-system/kube-proxy-kshs9/kube-proxy id=36f9d7a1-12f8-433c-845f-753b4cf92121 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f14f9778290afbd7383f2dd12ee1f50b74d62f40bf11ae42d2fd8c4a441931e1
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.135055953Z" level=info msg="Created container b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb: kube-system/coredns-7c65d6cfc9-wjzzx/coredns" id=8550ea1a-0c54-41a8-aba8-d2b784adb6ec name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.135635325Z" level=info msg="Starting container: b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb" id=0b2f19a6-547b-46c1-86a7-279192d46e7a name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:34:39 functional-546931 crio[5663]: time="2024-09-16 10:34:39.197318909Z" level=info msg="Started container" PID=6371 containerID=b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb description=kube-system/coredns-7c65d6cfc9-wjzzx/coredns id=0b2f19a6-547b-46c1-86a7-279192d46e7a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a8423288f91be1a84a4da521d6ae34bd864cd162a94fbed9d42a73771704123e
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.618921028Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.623191524Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.623230774Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.623253550Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.626701284Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.626735500Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.626761573Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.629964070Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.629991015Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.630002555Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.633237359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.633269299Z" level=info msg="Updated default CNI network name to kindnet"
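
The CREATE/WRITE/RENAME sequence above is CRI-O's CNI config watcher picking up kindnet's atomic config write: kindnet writes 10-kindnet.conflist.temp, then renames it into place, and CRI-O re-resolves the default network on each event. The result can be inspected on the node (a sketch, using minikube's ssh access for this profile):

    minikube ssh -p functional-546931 -- ls -l /etc/cni/net.d/
    minikube ssh -p functional-546931 -- cat /etc/cni/net.d/10-kindnet.conflist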
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b8b7b2145f381       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 seconds ago       Running             coredns                   2                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	79a9d7528eb3f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   15 seconds ago       Running             kindnet-cni               2                   4aa3f5aefc537       kindnet-6dtx8
	8b4c53b5f60bc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 seconds ago       Running             kube-proxy                2                   f14f9778290af       kube-proxy-kshs9
	1cc14bbfee0f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago       Running             storage-provisioner       3                   2133c690032da       storage-provisioner
	a27a3ce3a5b44       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 seconds ago       Running             kube-apiserver            0                   af1925dee3fc2       kube-apiserver-functional-546931
	442cc07de2d20       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   19 seconds ago       Running             etcd                      2                   5b3fe285a2416       etcd-functional-546931
	912dea9fa9508       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   19 seconds ago       Running             kube-scheduler            2                   f41f93397a4f0       kube-scheduler-functional-546931
	dd99b58642bf7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   19 seconds ago       Running             kube-controller-manager   2                   878410a4a3694       kube-controller-manager-functional-546931
	a51e8bf1740c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   52 seconds ago       Exited              storage-provisioner       2                   2133c690032da       storage-provisioner
	03c9ff61deb56       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            1                   f41f93397a4f0       kube-scheduler-functional-546931
	500f67fe93de9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	1923f1dc4c46c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      1                   5b3fe285a2416       etcd-functional-546931
	8578098c4830c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   1                   878410a4a3694       kube-controller-manager-functional-546931
	e2626d8943ee8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   About a minute ago   Exited              kindnet-cni               1                   4aa3f5aefc537       kindnet-6dtx8
	ce7cf09b88b18       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                1                   f14f9778290af       kube-proxy-kshs9
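
This table is the CRI-level view of the node, equivalent to crictl ps -a: the pre-restart containers are Exited, their replacements run at ATTEMPT 2 (storage-provisioner at 3), and kube-apiserver is ATTEMPT 0 because it came up in a fresh pod sandbox. To regenerate the listing:

    minikube ssh -p functional-546931 -- sudo crictl ps -a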
	
	
	==> coredns [500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32777 - 2477 "HINFO IN 3420670606416057959.5314460485211468677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080961734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
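
This is the pre-restart CoreDNS shutting down cleanly: the connection-refused errors reflect the API server being unreachable while the control plane restarted, and the lameduck message means the health plugin delays shutdown for 5s so in-flight queries can drain. The 5s value comes from the Corefile's health block; to view it (for reference only, since kubectl is unusable in this run):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'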
	
	
	==> coredns [b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48590 - 30001 "HINFO IN 6895879156775148846.7943209663817132014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009362696s
	
	
	==> describe nodes <==
	Name:               functional-546931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-546931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_33_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:34:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f68b7ee331b4ad9bbce7c85ad5c1bae
	  System UUID:                b53a3b64-9d61-46d9-a694-0cd93fe258a6
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wjzzx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     87s
	  kube-system                 etcd-functional-546931                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         94s
	  kube-system                 kindnet-6dtx8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-functional-546931             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-functional-546931    200m (2%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-kshs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-functional-546931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 86s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   Starting                 61s                kube-proxy       
	  Normal   NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s (x7 over 98s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     92s                kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 92s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  92s                kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s                kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 92s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           89s                node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   NodeReady                76s                kubelet          Node functional-546931 status is now: NodeReady
	  Normal   RegisteredNode           58s                node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  19s (x8 over 20s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 20s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x7 over 20s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13s                node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
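
The repeated Starting/CgroupV1/NodeHasSufficient* event clusters correspond to the node's initial registration (~98s ago) and the two kubelet restarts (~92s and ~20s ago) that match the container ATTEMPT counts above. The same view can be regenerated with:

    kubectl describe node functional-546931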
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.000714]  #3
	[  +0.002750]  #4
	[  +0.001708] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003513] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002098] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54] <==
	{"level":"info","ts":"2024-09-16T10:33:51.496123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.497277Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:51.497313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.497494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.498556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.498618Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.499441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:51.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.549372Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:19.549504Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:19.549651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.549778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567753Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:19.567807Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:34:19.570718Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570822Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570856Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [442cc07de2d20f1858aca970b1589445d9119ae98c169613f5a7a2162fb91a1f] <==
	{"level":"info","ts":"2024-09-16T10:34:35.628722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:35.628909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.629009Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.629102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.630742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:35.630981Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:35.631046Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:35.631386Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:35.631405Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:36.820902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.824459Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:36.824466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.824748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.826097Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.826340Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.827299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:36.827338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:34:54 up 17 min,  0 users,  load average: 0.35, 0.41, 0.30
	Linux functional-546931 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50] <==
	I0916 10:34:39.296704       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:34:39.296992       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:34:39.297141       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:34:39.297157       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:34:39.297188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:34:39.618532       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:34:39.618549       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:34:39.618556       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:34:39.918628       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:34:39.918678       1 metrics.go:61] Registering metrics
	I0916 10:34:39.918760       1 controller.go:374] Syncing nftables rules
	I0916 10:34:49.618556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:49.618661       1 main.go:299] handling current node
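
After resyncing its network-policy informer caches, kindnet reconciles nodes on a ten-second loop; with a single node, the "Handling node" / "handling current node" pair is the entire steady state. The nftables rules it reports syncing can be listed on the node (assuming the nft tool is present in the minikube base image):

    minikube ssh -p functional-546931 -- sudo nft list tables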
	
	
	==> kindnet [e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e] <==
	I0916 10:33:50.598229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:50.599351       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:50.600449       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:50.600526       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:50.600569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:51.126371       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:51.126391       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:51.126399       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:53.293595       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:53.293784       1 metrics.go:61] Registering metrics
	I0916 10:33:53.293935       1 controller.go:374] Syncing nftables rules
	I0916 10:34:01.126660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:01.126723       1 main.go:299] handling current node
	I0916 10:34:11.131420       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:11.131464       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46] <==
	I0916 10:34:37.813750       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0916 10:34:37.898681       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:34:37.904389       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:34:37.904522       1 policy_source.go:224] refreshing policies
	I0916 10:34:37.905206       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:34:37.906263       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:34:37.906296       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:34:37.906350       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:34:37.906357       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:34:37.906380       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:34:37.906400       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:34:37.906408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:34:37.906414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:34:37.908814       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:34:37.908932       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:34:37.908950       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:34:37.912871       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:34:37.916515       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:34:37.923754       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:34:38.812624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:34:39.678850       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:34:39.868256       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:34:39.879574       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:34:39.941085       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:34:39.947167       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
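
The one error here is the benign endpoint-reconciler message (no stale kubernetes Service endpoints existed to remove), and the "quota admission added evaluator" lines at 10:34:39 line up with the kubeadm init phase addon all run in the minikube log above. The healthz probe minikube used can be repeated directly, assuming the caller can reach the cluster's docker network:

    curl -k https://192.168.49.2:8441/healthz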
	
	
	==> kube-controller-manager [8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b] <==
	I0916 10:33:56.401158       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:33:56.401164       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:33:56.401172       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:33:56.401277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:56.403349       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:33:56.403423       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:33:56.403506       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:33:56.403561       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:33:56.513024       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:33:56.541883       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:56.542896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:33:56.544059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:33:56.544137       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:33:56.544141       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:33:56.548517       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.583700       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:33:56.600343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.606853       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:33:56.702066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.654324ms"
	I0916 10:33:56.702225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.375µs"
	I0916 10:33:57.010557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042413       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:58.552447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.544591ms"
	I0916 10:33:58.552540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.665µs"
	
	
	==> kube-controller-manager [dd99b58642bf7eb44b7455752a1b25ad758e6d5c63ee32949852dcef8026edae] <==
	I0916 10:34:41.250843       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:34:41.250921       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:34:41.251004       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:34:41.251032       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0916 10:34:41.251056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.839µs"
	I0916 10:34:41.251081       1 shared_informer.go:320] Caches are synced for crt configmap
	I0916 10:34:41.252021       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:34:41.252040       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 10:34:41.252109       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:34:41.344742       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:34:41.344872       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:34:41.344957       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:34:41.345000       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:34:41.401053       1 shared_informer.go:320] Caches are synced for expand
	I0916 10:34:41.402328       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:34:41.407002       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:41.421397       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:34:41.425963       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 10:34:41.426047       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:34:41.446292       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:34:41.449685       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:34:41.455194       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:41.866377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951654       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951690       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d] <==
	I0916 10:34:39.218200       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:34:39.331180       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:34:39.331273       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:39.352386       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:34:39.352459       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:39.354438       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:39.354816       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:39.354852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:39.355965       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:39.355967       1 config.go:199] "Starting service config controller"
	I0916 10:34:39.356016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:39.356018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:39.356050       1 config.go:328] "Starting node config controller"
	I0916 10:34:39.356062       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:39.456934       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:34:39.456969       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:39.456979       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b] <==
	I0916 10:33:50.617128       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:53.201354       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:53.201554       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:53.314988       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:53.315060       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:53.318944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:53.319862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:53.319904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.321510       1 config.go:199] "Starting service config controller"
	I0916 10:33:53.321547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:53.321583       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:53.321592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:53.322001       1 config.go:328] "Starting node config controller"
	I0916 10:33:53.322360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:53.421890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:53.421914       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:53.422563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a] <==
	I0916 10:33:51.925005       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:33:53.094343       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:33:53.094399       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:33:53.094414       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:33:53.094424       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:33:53.205695       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:33:53.205808       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.208746       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:33:53.208879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:33:53.208938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:33:53.208906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:33:53.309785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:19.550098       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:34:19.550186       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:34:19.550394       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [912dea9fa95088e76fc67e62800091be16d7f78ce4aebdd582e9645601d028f5] <==
	I0916 10:34:36.496922       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:34:37.813654       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:34:37.814327       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:34:37.814409       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:34:37.814446       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:34:37.907304       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:34:37.907329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:37.909440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:34:37.909504       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:34:37.909560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:34:37.909610       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:38.010226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:34:37 functional-546931 kubelet[6025]: I0916 10:34:37.925371    6025 kubelet_node_status.go:75] "Successfully registered node" node="functional-546931"
	Sep 16 10:34:37 functional-546931 kubelet[6025]: I0916 10:34:37.925420    6025 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:34:37 functional-546931 kubelet[6025]: I0916 10:34:37.926313    6025 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.695080    6025 apiserver.go:52] "Watching apiserver"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.698357    6025 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-546931" podUID="19d3920d-b342-4764-b722-116797db07ca"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.710496    6025 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-546931"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.722044    6025 status_manager.go:875] "Failed to update status for pod" pod="kube-system/kube-apiserver-functional-546931" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19d3920d-b342-4764-b722-116797db07ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-09-16T10:34:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-09-16T10:34:35Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2024-09-16T10:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-09-16T10:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2024-09-16T10:34:35Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.31.1\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-09-16T10:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ca-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/ca-certificates\\\",\\\"name\\\":\\\"etc-ca-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/minikube/certs\\\",\\\"name\\\":\\\"k8s-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/usr/local/share/ca-certificates\\\",\\\"name\\\":\\\"usr-local-share-ca-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/usr/share/ca-certificates\\\",\\\"name\\\":\\\"usr-share-ca-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"startTime\\\":\\\"2024-09-16T10:34:35Z\\\"}}\" for pod \"kube-system\"/\"kube-apiserver-functional-546931\": Pod \"kube-apiserver-functional-546931\" is invalid: metadata.uid: Invalid value: \"19d3920d-b342-4764-b722-116797db07ca\": field is immutable"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.736299    6025 scope.go:117] "RemoveContainer" containerID="0b7754d27e88e9a92bd31b9b5d7883173968f607d919cd68525fd33dd107cd75"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.794768    6025 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.813147    6025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-546931" podStartSLOduration=0.813122953 podStartE2EDuration="813.122953ms" podCreationTimestamp="2024-09-16 10:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:34:38.813088977 +0000 UTC m=+4.214337302" watchObservedRunningTime="2024-09-16 10:34:38.813122953 +0000 UTC m=+4.214371278"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827241    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-xtables-lock\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827385    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-cni-cfg\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827427    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a7e94614-567e-47ba-a51a-426f09198dba-tmp\") pod \"storage-provisioner\" (UID: \"a7e94614-567e-47ba-a51a-426f09198dba\") " pod="kube-system/storage-provisioner"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827500    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-lib-modules\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827526    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-xtables-lock\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827581    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-lib-modules\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999207    6025 scope.go:117] "RemoveContainer" containerID="500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999378    6025 scope.go:117] "RemoveContainer" containerID="ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999500    6025 scope.go:117] "RemoveContainer" containerID="e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999567    6025 scope.go:117] "RemoveContainer" containerID="a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b"
	Sep 16 10:34:40 functional-546931 kubelet[6025]: I0916 10:34:40.708631    6025 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" path="/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/volumes"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810256    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810297    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811575    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811622    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397] <==
	I0916 10:34:39.127159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:39.136475       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:39.136516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b] <==
	I0916 10:34:02.111528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:02.120479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:02.120525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:19.534445       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:19.534594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae!
	I0916 10:34:19.534583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae became leader
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546931 -n functional-546931
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (432.525µs)
helpers_test.go:263: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/ComponentHealth (2.03s)

x
+
TestFunctional/serial/InvalidService (0s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-546931 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-546931 apply -f testdata/invalidsvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (457.116µs)
functional_test.go:2323: kubectl --context functional-546931 apply -f testdata/invalidsvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/InvalidService (0.00s)
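
Every kubectl invocation from the host in this report dies the same way: fork/exec /usr/local/bin/kubectl: exec format error, returned in well under a millisecond (432µs and 457µs above). That error is the kernel's ENOEXEC: the file at /usr/local/bin/kubectl exists and passed the permission check, but it is not a binary this host can run, which usually means a download for the wrong architecture or a truncated/corrupt file. The process never starts, so every test that shells out to kubectl fails at its first step. A minimal standalone Go sketch, not part of the test suite (only the path is taken from this run), that spawns the binary the same way the harness does and isolates this case:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// Spawn kubectl the same way the test harness does: fork/exec via os/exec.
	out, err := exec.Command("/usr/local/bin/kubectl", "version", "--client").CombinedOutput()
	switch {
	case errors.Is(err, syscall.ENOEXEC):
		// ENOEXEC renders as "exec format error": the file exists but is
		// not a runnable binary for this host's architecture.
		fmt.Println("kubectl is not executable on this host:", err)
	case err != nil:
		fmt.Printf("kubectl started but failed: %v\n%s", err, out)
	default:
		fmt.Printf("kubectl OK:\n%s", out)
	}
}

On the affected agent this should take the ENOEXEC branch; once the binary is replaced with a build matching the host (linux/amd64 here, per the docker info below), it should print the client version instead.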
x
+
TestFunctional/parallel/DashboardCmd (4.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-546931 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-546931 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-546931 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-546931 --alsologtostderr -v=1] stderr:
I0916 10:35:01.153328   48907 out.go:345] Setting OutFile to fd 1 ...
I0916 10:35:01.153786   48907 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:01.153798   48907 out.go:358] Setting ErrFile to fd 2...
I0916 10:35:01.153803   48907 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:01.154117   48907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
I0916 10:35:01.154477   48907 mustload.go:65] Loading cluster: functional-546931
I0916 10:35:01.155022   48907 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:01.155591   48907 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
I0916 10:35:01.174761   48907 host.go:66] Checking if "functional-546931" exists ...
I0916 10:35:01.175068   48907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0916 10:35:01.235698   48907 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.222743162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0916 10:35:01.235860   48907 api_server.go:166] Checking apiserver status ...
I0916 10:35:01.235910   48907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0916 10:35:01.235972   48907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
I0916 10:35:01.256598   48907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
I0916 10:35:01.360207   48907 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6147/cgroup
I0916 10:35:01.370053   48907 api_server.go:182] apiserver freezer: "8:freezer:/docker/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/crio/crio-a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46"
I0916 10:35:01.370154   48907 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/crio/crio-a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46/freezer.state
I0916 10:35:01.378361   48907 api_server.go:204] freezer state: "THAWED"
I0916 10:35:01.378392   48907 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0916 10:35:01.394742   48907 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0916 10:35:01.394807   48907 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0916 10:35:01.395025   48907 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:01.395055   48907 addons.go:69] Setting dashboard=true in profile "functional-546931"
I0916 10:35:01.395068   48907 addons.go:234] Setting addon dashboard=true in "functional-546931"
I0916 10:35:01.395110   48907 host.go:66] Checking if "functional-546931" exists ...
I0916 10:35:01.395586   48907 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
I0916 10:35:01.416783   48907 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0916 10:35:01.418357   48907 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0916 10:35:01.419844   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0916 10:35:01.419863   48907 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0916 10:35:01.419917   48907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
I0916 10:35:01.442222   48907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
I0916 10:35:01.556274   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0916 10:35:01.556301   48907 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0916 10:35:01.576303   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0916 10:35:01.576328   48907 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0916 10:35:01.604098   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0916 10:35:01.604127   48907 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0916 10:35:01.627542   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0916 10:35:01.627565   48907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0916 10:35:01.652291   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0916 10:35:01.652313   48907 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0916 10:35:01.673118   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0916 10:35:01.673170   48907 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0916 10:35:01.702510   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0916 10:35:01.702542   48907 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0916 10:35:01.723186   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0916 10:35:01.723216   48907 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0916 10:35:01.742490   48907 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:35:01.742516   48907 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0916 10:35:01.761770   48907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:35:03.014744   48907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.252933696s)
I0916 10:35:03.016669   48907 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-546931 addons enable metrics-server

I0916 10:35:03.018121   48907 addons.go:197] Writing out "functional-546931" config to set dashboard=true...
W0916 10:35:03.018358   48907 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0916 10:35:03.019262   48907 kapi.go:59] client config for functional-546931: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0916 10:35:03.031338   48907 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  656bdc51-81d7-49ec-b5ff-35d2a753ae99 688 0 2024-09-16 10:35:02 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-09-16 10:35:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.110.155.226,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.155.226],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0916 10:35:03.031650   48907 out.go:270] * Launching proxy ...
* Launching proxy ...
I0916 10:35:03.031733   48907 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-546931 proxy --port 36195]
I0916 10:35:03.034111   48907 out.go:201] 
W0916 10:35:03.035616   48907 out.go:270] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: proxy start: fork/exec /usr/local/bin/kubectl: exec format error
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: proxy start: fork/exec /usr/local/bin/kubectl: exec format error
W0916 10:35:03.035635   48907 out.go:270] * 
* 
W0916 10:35:03.037526   48907 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 10:35:03.039114   48907 out.go:201] 
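
The stderr trace above narrows the dashboard failure down to the host binary: applying the dashboard manifests through the kubectl bundled inside the node (/var/lib/minikube/binaries/v1.31.1/kubectl) completed in about 1.25s, and the apiserver healthz probe on https://192.168.49.2:8441 returned 200, so the cluster side is healthy. Only the final step, spawning kubectl proxy --port 36195 with the host's /usr/local/bin/kubectl, hits the exec format error, so no URL is ever produced. A rough self-contained Go sketch of that proxy-launch step, for illustration only: it is not minikube's dashboard.go, and the "Starting to serve on" marker is an assumption about what kubectl proxy prints on startup.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Same binary, context, and port the dashboard command used in this run.
	cmd := exec.Command("/usr/local/bin/kubectl", "--context", "functional-546931", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		// This run fails right here: "fork/exec /usr/local/bin/kubectl:
		// exec format error" -- the proxy process never comes up.
		log.Fatalf("proxy start: %v", err)
	}
	defer cmd.Process.Kill()
	ready := make(chan string, 1)
	go func() {
		scanner := bufio.NewScanner(stdout)
		for scanner.Scan() {
			// Assumption: kubectl announces its listen address with a line
			// like "Starting to serve on 127.0.0.1:36195".
			if strings.Contains(scanner.Text(), "Starting to serve on") {
				ready <- scanner.Text()
				return
			}
		}
	}()
	select {
	case line := <-ready:
		fmt.Println("proxy up:", line)
	case <-time.After(10 * time.Second):
		log.Fatal("kubectl proxy never reported a listen address")
	}
}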
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-546931
helpers_test.go:235: (dbg) docker inspect functional-546931:

-- stdout --
	[
	    {
	        "Id": "481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383",
	        "Created": "2024-09-16T10:33:07.830189623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:33:07.949246182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hostname",
	        "HostsPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hosts",
	        "LogPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383-json.log",
	        "Name": "/functional-546931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-546931:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-546931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-546931",
	                "Source": "/var/lib/docker/volumes/functional-546931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546931",
	                "name.minikube.sigs.k8s.io": "functional-546931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a63c1ddb1b935e3fe8e5ef70fdb0c600197ad5f66a82a23245d6065ac1a636ff",
	            "SandboxKey": "/var/run/docker/netns/a63c1ddb1b93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c19058e5aabeca0bc30434433d26203e7a45051a16cbafeae207abc5b1915f6c",
	                    "EndpointID": "d06fb1106d7a54a1e55e6e03322a29be01414e698106136216a156a15ae725c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546931",
	                        "481b09cdfdae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
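
The inspect output above also shows how the harness reaches the node from the host: each guest port is published on a 127.0.0.1 host port, and 22/tcp maps to 127.0.0.1:32778, exactly the endpoint the sshutil.go lines earlier connected to. minikube resolves that port with the template logged at 10:35:01, docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-546931. An illustrative Go equivalent of the same lookup, decoding docker inspect's JSON directly; the struct below is ad hoc for this sketch, not minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspectEntry models only the slice of `docker inspect` output we need.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "functional-546931").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("no such container")
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		log.Fatal("no host binding for 22/tcp")
	}
	// For this run: 127.0.0.1:32778, the SSH endpoint used by the harness.
	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
}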
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546931 -n functional-546931
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs -n 25: (1.565780162s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-546931 ssh -- ls                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:35 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| service   | functional-546931 service                                                | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC |                     |
	|           | hello-node --url                                                         |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh -n                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | functional-546931 sudo cat                                               |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh cat                                                | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | /mount-9p/test-1726482898825665135                                       |                   |         |         |                     |                     |
	| start     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=docker                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh mount |                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | grep 9p; ls -la /mount-9p; cat                                           |                   |         |         |                     |                     |
	|           | /mount-9p/pod-dates                                                      |                   |         |         |                     |                     |
	| start     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=docker                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=docker                                                     |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | -p functional-546931                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh echo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | hello                                                                    |                   |         |         |                     |                     |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdspecific-port2367125525/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh cat                                                | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | /etc/hostname                                                            |                   |         |         |                     |                     |
	| tunnel    | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel    | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| tunnel    | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh -- ls                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| license   |                                                                          | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
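
Editor's note: the audit trail ends at the dashboard invocation under test. As a reproduction sketch built only from that Audit row (profile name, port, and flags taken verbatim from the table), the same call can be re-issued by hand:

    out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-546931 --alsologtostderr -v=1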
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:35:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:35:00.918258   48694 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:00.918452   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918475   48694 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:00.918487   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918709   48694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:35:00.919256   48694 out.go:352] Setting JSON to false
	I0916 10:35:00.920662   48694 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1041,"bootTime":1726481860,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:00.920778   48694 start.go:139] virtualization: kvm guest
	I0916 10:35:00.924235   48694 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:00.931262   48694 notify.go:220] Checking for updates...
	I0916 10:35:00.931605   48694 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:00.933358   48694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:00.935102   48694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:35:00.936553   48694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:35:00.937907   48694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:00.939153   48694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:00.941266   48694 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:00.942118   48694 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:00.982940   48694 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:35:00.983034   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.072175   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.05984963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.072322   48694 docker.go:318] overlay module found
	I0916 10:35:01.074333   48694 out.go:177] * Using the docker driver based on existing profile
	I0916 10:35:01.075819   48694 start.go:297] selected driver: docker
	I0916 10:35:01.075840   48694 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.075969   48694 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:01.076061   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.145804   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.134479908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.146698   48694 cni.go:84] Creating CNI manager for ""
	I0916 10:35:01.146754   48694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:35:01.146819   48694 start.go:340] cluster config:
	{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.148893   48694 out.go:177] * dry-run validation complete!
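
Editor's note: the dry-run start above validates the existing docker driver by shelling out twice to "docker system info" (cli_runner.go:164). To eyeball the same driver data outside the test harness, the identical command can be run directly; the jq filter is an optional convenience and assumes jq is installed:

    docker system info --format "{{json .}}" | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal}'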
	
	
	==> CRI-O <==
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.629991015Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.630002555Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.633237359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:34:49 functional-546931 crio[5663]: time="2024-09-16 10:34:49.633269299Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.104800400Z" level=info msg="Running pod sandbox: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/POD" id=15f382fc-2394-47d9-8124-66b1d28d95df name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.104859965Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.106681789Z" level=info msg="Running pod sandbox: kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/POD" id=ad660a29-463e-4b5b-941d-6518b3b41834 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.106751859Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.119874253Z" level=info msg="Got pod network &{Name:dashboard-metrics-scraper-c5db448b4-7c2lp Namespace:kubernetes-dashboard ID:03844bf992fc98df6d81bbcc15fb2182753b34df7aabffa7794374eb4e70f936 UID:e8a97415-7eb6-4d52-99c2-916e38eb0960 NetNS:/var/run/netns/93eaae46-4d85-4a9b-b9f6-8be535a5cc8f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.119922347Z" level=info msg="Adding pod kubernetes-dashboard_dashboard-metrics-scraper-c5db448b4-7c2lp to CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.132619365Z" level=info msg="Got pod network &{Name:dashboard-metrics-scraper-c5db448b4-7c2lp Namespace:kubernetes-dashboard ID:03844bf992fc98df6d81bbcc15fb2182753b34df7aabffa7794374eb4e70f936 UID:e8a97415-7eb6-4d52-99c2-916e38eb0960 NetNS:/var/run/netns/93eaae46-4d85-4a9b-b9f6-8be535a5cc8f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.132796484Z" level=info msg="Checking pod kubernetes-dashboard_dashboard-metrics-scraper-c5db448b4-7c2lp for CNI network kindnet (type=ptp)"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.139534092Z" level=info msg="Ran pod sandbox 03844bf992fc98df6d81bbcc15fb2182753b34df7aabffa7794374eb4e70f936 with infra container: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/POD" id=15f382fc-2394-47d9-8124-66b1d28d95df name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.140381100Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-695b96c756-5ftj6 Namespace:kubernetes-dashboard ID:02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189 UID:9dae2eb0-2710-46a3-b5e1-17d5ee4b9367 NetNS:/var/run/netns/e397a580-a4b9-4dd4-a293-f15a5e318fbb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.140417549Z" level=info msg="Adding pod kubernetes-dashboard_kubernetes-dashboard-695b96c756-5ftj6 to CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.140976182Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=c8d344c5-38f0-4e48-9c6a-485d121fdc8b name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.141267607Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=c8d344c5-38f0-4e48-9c6a-485d121fdc8b name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.142391220Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ded3ee86-f493-4cfb-aec9-e6f34e50407c name=/runtime.v1.ImageService/PullImage
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.150279875Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.151791656Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-695b96c756-5ftj6 Namespace:kubernetes-dashboard ID:02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189 UID:9dae2eb0-2710-46a3-b5e1-17d5ee4b9367 NetNS:/var/run/netns/e397a580-a4b9-4dd4-a293-f15a5e318fbb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.151959311Z" level=info msg="Checking pod kubernetes-dashboard_kubernetes-dashboard-695b96c756-5ftj6 for CNI network kindnet (type=ptp)"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.154117399Z" level=info msg="Ran pod sandbox 02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189 with infra container: kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/POD" id=ad660a29-463e-4b5b-941d-6518b3b41834 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.155346200Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1dc22da1-f620-47a2-b510-3341b99e3dbf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.155630796Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1dc22da1-f620-47a2-b510-3341b99e3dbf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:04 functional-546931 crio[5663]: time="2024-09-16 10:35:04.199624344Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
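
Editor's note: the final CRI-O entries show both dashboard images (kubernetesui/metrics-scraper and kubernetesui/dashboard) reported as not found and still being pulled from docker.io when this post-mortem was captured, consistent with the 2-second-old dashboard pods in the node description below. A sketch for checking whether the pulls ever completed, using the CRI socket listed in the node annotations:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images | grep kubernetesui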
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b8b7b2145f381       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago       Running             coredns                   2                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	79a9d7528eb3f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   25 seconds ago       Running             kindnet-cni               2                   4aa3f5aefc537       kindnet-6dtx8
	8b4c53b5f60bc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   25 seconds ago       Running             kube-proxy                2                   f14f9778290af       kube-proxy-kshs9
	1cc14bbfee0f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   25 seconds ago       Running             storage-provisioner       3                   2133c690032da       storage-provisioner
	a27a3ce3a5b44       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   28 seconds ago       Running             kube-apiserver            0                   af1925dee3fc2       kube-apiserver-functional-546931
	442cc07de2d20       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   28 seconds ago       Running             etcd                      2                   5b3fe285a2416       etcd-functional-546931
	912dea9fa9508       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   28 seconds ago       Running             kube-scheduler            2                   f41f93397a4f0       kube-scheduler-functional-546931
	dd99b58642bf7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   28 seconds ago       Running             kube-controller-manager   2                   878410a4a3694       kube-controller-manager-functional-546931
	a51e8bf1740c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       2                   2133c690032da       storage-provisioner
	03c9ff61deb56       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            1                   f41f93397a4f0       kube-scheduler-functional-546931
	500f67fe93de9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	1923f1dc4c46c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      1                   5b3fe285a2416       etcd-functional-546931
	8578098c4830c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   1                   878410a4a3694       kube-controller-manager-functional-546931
	e2626d8943ee8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   About a minute ago   Exited              kindnet-cni               1                   4aa3f5aefc537       kindnet-6dtx8
	ce7cf09b88b18       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                1                   f14f9778290af       kube-proxy-kshs9
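
Editor's note: the runtime view above, including the exited earlier attempts of each component, can be reproduced on the node. A minimal sketch against the same CRI-O socket:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a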
	
	
	==> coredns [500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32777 - 2477 "HINFO IN 3420670606416057959.5314460485211468677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080961734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
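
Editor's note: the connection-refused burst against 10.96.0.1:443 lines up with the apiserver being down across the restart (the replacement kube-apiserver container above is only 28 seconds old); this CoreDNS instance then received SIGTERM, and its successor below starts cleanly. A quick check that the window has closed, assuming the test's kubeconfig context:

    kubectl --context functional-546931 get --raw '/readyz?verbose' | tail -n 5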
	
	
	==> coredns [b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48590 - 30001 "HINFO IN 6895879156775148846.7943209663817132014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009362696s
	
	
	==> describe nodes <==
	Name:               functional-546931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-546931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_33_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:34:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f68b7ee331b4ad9bbce7c85ad5c1bae
	  System UUID:                b53a3b64-9d61-46d9-a694-0cd93fe258a6
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wjzzx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     97s
	  kube-system                 etcd-functional-546931                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         104s
	  kube-system                 kindnet-6dtx8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      97s
	  kube-system                 kube-apiserver-functional-546931             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-functional-546931    200m (2%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-kshs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-functional-546931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-7c2lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-5ftj6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 96s                  kube-proxy       
	  Normal   Starting                 25s                  kube-proxy       
	  Normal   Starting                 71s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x7 over 108s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     102s                 kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 102s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  102s                 kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s                 kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 102s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           99s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   NodeReady                86s                  kubelet          Node functional-546931 status is now: NodeReady
	  Normal   RegisteredNode           68s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   Starting                 30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 30s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  29s (x8 over 30s)    kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29s (x8 over 30s)    kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     29s (x7 over 30s)    kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           23s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
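
Editor's note: the three RegisteredNode events (99s, 68s, 23s ago) record the two in-test restarts, and the node has stayed Ready since its NodeReady transition, so kubelet and scheduling are healthy here. The snapshot above can be refreshed at any time with:

    kubectl --context functional-546931 describe node functional-546931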
	
	
	==> dmesg <==
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 10:35] FS-Cache: Duplicate cookie detected
	[  +0.005031] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000007485c404{9P.session} n=000000002b39a795
	[  +0.007541] FS-Cache: O-key=[10] '34323935313533303732'
	[  +0.005370] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.006617] FS-Cache: N-cookie d=000000007485c404{9P.session} n=00000000364f9863
	[  +0.008939] FS-Cache: N-key=[10] '34323935313533303732'
	
	
	==> etcd [1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54] <==
	{"level":"info","ts":"2024-09-16T10:33:51.496123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.497277Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:51.497313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.497494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.498556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.498618Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.499441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:51.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.549372Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:19.549504Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:19.549651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.549778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567753Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:19.567807Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:34:19.570718Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570822Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570856Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [442cc07de2d20f1858aca970b1589445d9119ae98c169613f5a7a2162fb91a1f] <==
	{"level":"info","ts":"2024-09-16T10:34:35.628722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:35.628909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.629009Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.629102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.630742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:35.630981Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:35.631046Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:35.631386Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:35.631405Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:36.820902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.824459Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:36.824466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.824748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.826097Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.826340Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.827299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:36.827338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:35:04 up 17 min,  0 users,  load average: 1.49, 0.65, 0.38
	Linux functional-546931 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50] <==
	I0916 10:34:39.296704       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:34:39.296992       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:34:39.297141       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:34:39.297157       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:34:39.297188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:34:39.618532       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:34:39.618549       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:34:39.618556       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:34:39.918628       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:34:39.918678       1 metrics.go:61] Registering metrics
	I0916 10:34:39.918760       1 controller.go:374] Syncing nftables rules
	I0916 10:34:49.618556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:49.618661       1 main.go:299] handling current node
	I0916 10:34:59.625424       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:59.625493       1 main.go:299] handling current node
	
	
	==> kindnet [e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e] <==
	I0916 10:33:50.598229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:50.599351       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:50.600449       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:50.600526       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:50.600569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:51.126371       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:51.126391       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:51.126399       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:53.293595       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:53.293784       1 metrics.go:61] Registering metrics
	I0916 10:33:53.293935       1 controller.go:374] Syncing nftables rules
	I0916 10:34:01.126660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:01.126723       1 main.go:299] handling current node
	I0916 10:34:11.131420       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:11.131464       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46] <==
	I0916 10:34:37.906296       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:34:37.906350       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:34:37.906357       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:34:37.906380       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:34:37.906400       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:34:37.906408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:34:37.906414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:34:37.908814       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:34:37.908932       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:34:37.908950       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:34:37.912871       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:34:37.916515       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:34:37.923754       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:34:38.812624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:34:39.678850       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:34:39.868256       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:34:39.879574       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:34:39.941085       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:34:39.947167       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:34:56.583902       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:02.580292       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:35:02.631388       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:35:02.925711       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.155.226"}
	I0916 10:35:02.995387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:03.006863       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.172.127"}
	
	
	==> kube-controller-manager [8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b] <==
	I0916 10:33:56.401158       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:33:56.401164       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:33:56.401172       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:33:56.401277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:56.403349       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:33:56.403423       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:33:56.403506       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:33:56.403561       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:33:56.513024       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:33:56.541883       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:56.542896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:33:56.544059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:33:56.544137       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:33:56.544141       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:33:56.548517       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.583700       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:33:56.600343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.606853       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:33:56.702066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.654324ms"
	I0916 10:33:56.702225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.375µs"
	I0916 10:33:57.010557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042413       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:58.552447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.544591ms"
	I0916 10:33:58.552540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.665µs"
	
	
	==> kube-controller-manager [dd99b58642bf7eb44b7455752a1b25ad758e6d5c63ee32949852dcef8026edae] <==
	I0916 10:34:41.407002       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:41.421397       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:34:41.425963       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 10:34:41.426047       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:34:41.446292       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:34:41.449685       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:34:41.455194       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:41.866377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951654       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951690       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:02.709012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.757176ms"
	E0916 10:35:02.709065       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.709405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.008906ms"
	E0916 10:35:02.709504       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.642122ms"
	E0916 10:35:02.720256       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.567387ms"
	E0916 10:35:02.720286       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.803923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="82.487261ms"
	I0916 10:35:02.817637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="96.173054ms"
	I0916 10:35:02.898157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="79.487998ms"
	I0916 10:35:02.898365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="73.07µs"
	I0916 10:35:02.908590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="104.542603ms"
	I0916 10:35:02.908674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="37.49µs"
	I0916 10:35:02.908825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="342.16µs"
	
	
	==> kube-proxy [8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d] <==
	I0916 10:34:39.218200       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:34:39.331180       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:34:39.331273       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:39.352386       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:34:39.352459       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:39.354438       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:39.354816       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:39.354852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:39.355965       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:39.355967       1 config.go:199] "Starting service config controller"
	I0916 10:34:39.356016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:39.356018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:39.356050       1 config.go:328] "Starting node config controller"
	I0916 10:34:39.356062       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:39.456934       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:34:39.456969       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:39.456979       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b] <==
	I0916 10:33:50.617128       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:53.201354       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:53.201554       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:53.314988       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:53.315060       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:53.318944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:53.319862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:53.319904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.321510       1 config.go:199] "Starting service config controller"
	I0916 10:33:53.321547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:53.321583       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:53.321592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:53.322001       1 config.go:328] "Starting node config controller"
	I0916 10:33:53.322360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:53.421890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:53.421914       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:53.422563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a] <==
	I0916 10:33:51.925005       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:33:53.094343       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:33:53.094399       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:33:53.094414       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:33:53.094424       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:33:53.205695       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:33:53.205808       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.208746       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:33:53.208879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:33:53.208938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:33:53.208906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:33:53.309785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:19.550098       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:34:19.550186       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:34:19.550394       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [912dea9fa95088e76fc67e62800091be16d7f78ce4aebdd582e9645601d028f5] <==
	I0916 10:34:36.496922       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:34:37.813654       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:34:37.814327       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:34:37.814409       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:34:37.814446       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:34:37.907304       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:34:37.907329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:37.909440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:34:37.909504       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:34:37.909560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:34:37.909610       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:38.010226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.813147    6025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-546931" podStartSLOduration=0.813122953 podStartE2EDuration="813.122953ms" podCreationTimestamp="2024-09-16 10:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:34:38.813088977 +0000 UTC m=+4.214337302" watchObservedRunningTime="2024-09-16 10:34:38.813122953 +0000 UTC m=+4.214371278"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827241    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-xtables-lock\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827385    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-cni-cfg\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827427    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a7e94614-567e-47ba-a51a-426f09198dba-tmp\") pod \"storage-provisioner\" (UID: \"a7e94614-567e-47ba-a51a-426f09198dba\") " pod="kube-system/storage-provisioner"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827500    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-lib-modules\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827526    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-xtables-lock\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827581    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-lib-modules\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999207    6025 scope.go:117] "RemoveContainer" containerID="500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999378    6025 scope.go:117] "RemoveContainer" containerID="ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999500    6025 scope.go:117] "RemoveContainer" containerID="e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999567    6025 scope.go:117] "RemoveContainer" containerID="a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b"
	Sep 16 10:34:40 functional-546931 kubelet[6025]: I0916 10:34:40.708631    6025 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" path="/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/volumes"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810256    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810297    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811575    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811622    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: E0916 10:35:02.803189    6025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.803256    6025 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900493    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900565    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8a97415-7eb6-4d52-99c2-916e38eb0960-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900597    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4skd\" (UniqueName: \"kubernetes.io/projected/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-kube-api-access-d4skd\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900646    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq9v\" (UniqueName: \"kubernetes.io/projected/e8a97415-7eb6-4d52-99c2-916e38eb0960-kube-api-access-nmq9v\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:03 functional-546931 kubelet[6025]: I0916 10:35:03.009620    6025 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.812961    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.813005    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397] <==
	I0916 10:34:39.127159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:39.136475       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:39.136516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:56.587879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:56.587950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342 became leader
	I0916 10:34:56.588053       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	I0916 10:34:56.688953       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	
	
	==> storage-provisioner [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b] <==
	I0916 10:34:02.111528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:02.120479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:02.120525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:19.534445       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:19.534594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae!
	I0916 10:34:19.534583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae became leader
	

-- /stdout --
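Both storage-provisioner excerpts above show the standard Kubernetes leader-election handshake: every replica races to acquire the kube-system/k8s.io-minikube-hostpath lock, and only the winner starts the provisioner controller (here via an Endpoints-based lock, as the event objects show). For reference, a minimal sketch of the same pattern using client-go's leaderelection package with the modern Lease lock; the identity, timings, and callbacks are illustrative, not the provisioner's actual code:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// One shared lock object; whoever holds/renews it is the leader.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // how long a lease stays valid without renewal
			RenewDeadline: 10 * time.Second, // leader must renew within this window
			RetryPeriod:   2 * time.Second,  // how often candidates retry acquisition
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting controller")
					<-ctx.Done() // run until leadership is lost
				},
				OnStoppedLeading: func() { log.Println("lost lease, shutting down") },
			},
		})
	}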
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546931 -n functional-546931
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (444.856µs)
helpers_test.go:263: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/DashboardCmd (4.36s)
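Note the common thread in the failures above: every kubectl invocation dies with "fork/exec /usr/local/bin/kubectl: exec format error", i.e. the kernel refused to execute the binary at all. That typically points to a download for the wrong architecture or a truncated/corrupt file, not to any cluster problem. One quick way to check, sketched here with Go's debug/elf (the hardcoded path is the one from the failing tests):

	package main

	import (
		"debug/elf"
		"fmt"
		"log"
		"runtime"
	)

	func main() {
		// "exec format error" means the kernel rejected the binary's header.
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			// Not a valid ELF file at all (truncated download, HTML error page, ...).
			log.Fatalf("not a readable ELF binary: %v", err)
		}
		defer f.Close()

		// On this amd64 host we would expect ELFCLASS64 / EM_X86_64.
		fmt.Printf("class=%s machine=%s (host GOARCH=%s)\n", f.Class, f.Machine, runtime.GOARCH)
		if f.Machine != elf.EM_X86_64 {
			fmt.Println("architecture mismatch: re-download kubectl for this host")
		}
	}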

x
+
TestFunctional/parallel/ServiceCmdConnect (2.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-546931 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context functional-546931 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: fork/exec /usr/local/bin/kubectl: exec format error (381.378µs)
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-546931 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": fork/exec /usr/local/bin/kubectl: exec format error.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-546931 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-546931 describe po hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (399.046µs)
functional_test.go:1604: "kubectl --context functional-546931 describe po hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-546931 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-546931 logs -l app=hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (353.063µs)
functional_test.go:1610: "kubectl --context functional-546931 logs -l app=hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-546931 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-546931 describe svc hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (366.506µs)
functional_test.go:1616: "kubectl --context functional-546931 describe svc hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-546931
helpers_test.go:235: (dbg) docker inspect functional-546931:

-- stdout --
	[
	    {
	        "Id": "481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383",
	        "Created": "2024-09-16T10:33:07.830189623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:33:07.949246182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hostname",
	        "HostsPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hosts",
	        "LogPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383-json.log",
	        "Name": "/functional-546931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-546931:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-546931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-546931",
	                "Source": "/var/lib/docker/volumes/functional-546931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546931",
	                "name.minikube.sigs.k8s.io": "functional-546931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a63c1ddb1b935e3fe8e5ef70fdb0c600197ad5f66a82a23245d6065ac1a636ff",
	            "SandboxKey": "/var/run/docker/netns/a63c1ddb1b93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c19058e5aabeca0bc30434433d26203e7a45051a16cbafeae207abc5b1915f6c",
	                    "EndpointID": "d06fb1106d7a54a1e55e6e03322a29be01414e698106136216a156a15ae725c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546931",
	                        "481b09cdfdae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
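The inspect output above also documents how the kic driver wires up connectivity: each container port (22 for SSH, 8441 for this profile's apiserver, and so on) is published to an ephemeral host port bound to 127.0.0.1. A small sketch of reading that mapping back out of docker inspect; it assumes the docker CLI on PATH and the functional-546931 container, and is illustrative rather than what the test harness itself does:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// portBinding mirrors the HostIp/HostPort objects in docker inspect output.
	type portBinding struct {
		HostIp   string
		HostPort string
	}

	func main() {
		// docker inspect prints a JSON array with one object per container.
		out, err := exec.Command("docker", "inspect", "functional-546931").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []struct {
			NetworkSettings struct {
				Ports map[string][]portBinding
			}
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("no such container")
		}
		// 8441/tcp is the apiserver port exposed for this profile.
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}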
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546931 -n functional-546931
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs -n 25: (1.799211789s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-546931 ssh echo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | hello                                                                    |                   |         |         |                     |                     |
	| mount   | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port2367125525/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh cat                                                | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | /etc/hostname                                                            |                   |         |         |                     |                     |
	| tunnel  | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| tunnel  | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh -- ls                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount   | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | -T /mount1                                                               |                   |         |         |                     |                     |
	| license |                                                                          | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| mount   | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount   | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | --kill=true                                                              |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | systemctl is-active docker                                               |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | systemctl is-active containerd                                           |                   |         |         |                     |                     |
	| addons  | functional-546931 addons list                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| addons  | functional-546931 addons list                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -o json                                                                  |                   |         |         |                     |                     |
	| image   | functional-546931 image load --daemon                                    | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | kicbase/echo-server:functional-546931                                    |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image   | functional-546931 image ls                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:35:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:35:00.918258   48694 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:00.918452   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918475   48694 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:00.918487   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918709   48694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:35:00.919256   48694 out.go:352] Setting JSON to false
	I0916 10:35:00.920662   48694 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1041,"bootTime":1726481860,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:00.920778   48694 start.go:139] virtualization: kvm guest
	I0916 10:35:00.924235   48694 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:00.931262   48694 notify.go:220] Checking for updates...
	I0916 10:35:00.931605   48694 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:00.933358   48694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:00.935102   48694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:35:00.936553   48694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:35:00.937907   48694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:00.939153   48694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:00.941266   48694 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:00.942118   48694 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:00.982940   48694 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:35:00.983034   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.072175   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.05984963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.072322   48694 docker.go:318] overlay module found
	I0916 10:35:01.074333   48694 out.go:177] * Using the docker driver based on existing profile
	I0916 10:35:01.075819   48694 start.go:297] selected driver: docker
	I0916 10:35:01.075840   48694 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.075969   48694 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:01.076061   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.145804   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.134479908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.146698   48694 cni.go:84] Creating CNI manager for ""
	I0916 10:35:01.146754   48694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:35:01.146819   48694 start.go:340] cluster config:
	{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.148893   48694 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.151791656Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-695b96c756-5ftj6 Namespace:kubernetes-dashboard ID:02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189 UID:9dae2eb0-2710-46a3-b5e1-17d5ee4b9367 NetNS:/var/run/netns/e397a580-a4b9-4dd4-a293-f15a5e318fbb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.151959311Z" level=info msg="Checking pod kubernetes-dashboard_kubernetes-dashboard-695b96c756-5ftj6 for CNI network kindnet (type=ptp)"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.154117399Z" level=info msg="Ran pod sandbox 02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189 with infra container: kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/POD" id=ad660a29-463e-4b5b-941d-6518b3b41834 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.155346200Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1dc22da1-f620-47a2-b510-3341b99e3dbf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.155630796Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1dc22da1-f620-47a2-b510-3341b99e3dbf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:04 functional-546931 crio[5663]: time="2024-09-16 10:35:04.199624344Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.550214425Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=ded3ee86-f493-4cfb-aec9-e6f34e50407c name=/runtime.v1.ImageService/PullImage
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.550997471Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=8b79152a-1a72-494d-ab68-34c6690e82ae name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.551761856Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c],Size_:43824855,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8b79152a-1a72-494d-ab68-34c6690e82ae name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.552231838Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=36f52340-c482-4672-a901-32512ef4d80e name=/runtime.v1.ImageService/PullImage
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.552560072Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ca91ce83-b530-4139-b7ce-6e4adb36355b name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.553449188Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.553527183Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c],Size_:43824855,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=ca91ce83-b530-4139-b7ce-6e4adb36355b name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.554326746Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/dashboard-metrics-scraper" id=98a47449-2f6d-4aa0-95a8-192eaf56a2ad name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.554448824Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.565915372Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e72f8f4f6ca79f0d9f6ed4a6ebc09d72da0e36a4da649d6518503399e365507e/merged/etc/group: no such file or directory"
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.599217946Z" level=info msg="Created container 716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/dashboard-metrics-scraper" id=98a47449-2f6d-4aa0-95a8-192eaf56a2ad name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.599856261Z" level=info msg="Starting container: 716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979" id=de200919-d589-4879-b309-49db1992d297 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.605828097Z" level=info msg="Started container" PID=8987 containerID=716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979 description=kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/dashboard-metrics-scraper id=de200919-d589-4879-b309-49db1992d297 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03844bf992fc98df6d81bbcc15fb2182753b34df7aabffa7794374eb4e70f936
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.556528532Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.808875291Z" level=info msg="Checking image status: kicbase/echo-server:functional-546931" id=a0af7080-7df5-4f06-bd74-44bd8ef316cf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.847591643Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-546931" id=de85d1f1-18ae-4477-a31e-e19d083d37f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.847885607Z" level=info msg="Image docker.io/kicbase/echo-server:functional-546931 not found" id=de85d1f1-18ae-4477-a31e-e19d083d37f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.882554347Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-546931" id=d07d17ef-fa56-4de9-af5b-c072f0bc1893 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.882742608Z" level=info msg="Image localhost/kicbase/echo-server:functional-546931 not found" id=d07d17ef-fa56-4de9-af5b-c072f0bc1893 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	716706ee816f0       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   2 seconds ago        Running             dashboard-metrics-scraper   0                   03844bf992fc9       dashboard-metrics-scraper-c5db448b4-7c2lp
	b8b7b2145f381       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 30 seconds ago       Running             coredns                     2                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	79a9d7528eb3f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 30 seconds ago       Running             kindnet-cni                 2                   4aa3f5aefc537       kindnet-6dtx8
	8b4c53b5f60bc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 30 seconds ago       Running             kube-proxy                  2                   f14f9778290af       kube-proxy-kshs9
	1cc14bbfee0f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 30 seconds ago       Running             storage-provisioner         3                   2133c690032da       storage-provisioner
	a27a3ce3a5b44       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 33 seconds ago       Running             kube-apiserver              0                   af1925dee3fc2       kube-apiserver-functional-546931
	442cc07de2d20       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 33 seconds ago       Running             etcd                        2                   5b3fe285a2416       etcd-functional-546931
	912dea9fa9508       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 33 seconds ago       Running             kube-scheduler              2                   f41f93397a4f0       kube-scheduler-functional-546931
	dd99b58642bf7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 33 seconds ago       Running             kube-controller-manager     2                   878410a4a3694       kube-controller-manager-functional-546931
	a51e8bf1740c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 About a minute ago   Exited              storage-provisioner         2                   2133c690032da       storage-provisioner
	03c9ff61deb56       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 About a minute ago   Exited              kube-scheduler              1                   f41f93397a4f0       kube-scheduler-functional-546931
	500f67fe93de9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 About a minute ago   Exited              coredns                     1                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	1923f1dc4c46c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 About a minute ago   Exited              etcd                        1                   5b3fe285a2416       etcd-functional-546931
	8578098c4830c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 About a minute ago   Exited              kube-controller-manager     1                   878410a4a3694       kube-controller-manager-functional-546931
	e2626d8943ee8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 About a minute ago   Exited              kindnet-cni                 1                   4aa3f5aefc537       kindnet-6dtx8
	ce7cf09b88b18       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 About a minute ago   Exited              kube-proxy                  1                   f14f9778290af       kube-proxy-kshs9
	
	
	==> coredns [500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32777 - 2477 "HINFO IN 3420670606416057959.5314460485211468677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080961734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48590 - 30001 "HINFO IN 6895879156775148846.7943209663817132014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009362696s
	
	
	==> describe nodes <==
	Name:               functional-546931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-546931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_33_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:35:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f68b7ee331b4ad9bbce7c85ad5c1bae
	  System UUID:                b53a3b64-9d61-46d9-a694-0cd93fe258a6
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wjzzx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-functional-546931                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-6dtx8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-functional-546931             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-functional-546931    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-kshs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-functional-546931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-7c2lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-5ftj6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 101s                 kube-proxy       
	  Normal   Starting                 29s                  kube-proxy       
	  Normal   Starting                 75s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s (x7 over 113s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     107s                 kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 107s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  107s                 kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s                 kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 107s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           104s                 node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   NodeReady                91s                  kubelet          Node functional-546931 status is now: NodeReady
	  Normal   RegisteredNode           73s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   Starting                 35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  34s (x8 over 35s)    kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s (x8 over 35s)    kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s (x7 over 35s)    kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	
	
	==> dmesg <==
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 10:35] FS-Cache: Duplicate cookie detected
	[  +0.005031] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000007485c404{9P.session} n=000000002b39a795
	[  +0.007541] FS-Cache: O-key=[10] '34323935313533303732'
	[  +0.005370] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.006617] FS-Cache: N-cookie d=000000007485c404{9P.session} n=00000000364f9863
	[  +0.008939] FS-Cache: N-key=[10] '34323935313533303732'
	
	
	==> etcd [1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54] <==
	{"level":"info","ts":"2024-09-16T10:33:51.496123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.497277Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:51.497313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.497494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.498556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.498618Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.499441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:51.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.549372Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:19.549504Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:19.549651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.549778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567753Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:19.567807Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:34:19.570718Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570822Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570856Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [442cc07de2d20f1858aca970b1589445d9119ae98c169613f5a7a2162fb91a1f] <==
	{"level":"info","ts":"2024-09-16T10:34:35.628722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:35.628909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.629009Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.629102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.630742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:35.630981Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:35.631046Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:35.631386Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:35.631405Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:36.820902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.824459Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:36.824466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.824748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.826097Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.826340Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.827299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:36.827338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:35:09 up 17 min,  0 users,  load average: 1.53, 0.67, 0.39
	Linux functional-546931 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50] <==
	I0916 10:34:39.296704       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:34:39.296992       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:34:39.297141       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:34:39.297157       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:34:39.297188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:34:39.618532       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:34:39.618549       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:34:39.618556       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:34:39.918628       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:34:39.918678       1 metrics.go:61] Registering metrics
	I0916 10:34:39.918760       1 controller.go:374] Syncing nftables rules
	I0916 10:34:49.618556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:49.618661       1 main.go:299] handling current node
	I0916 10:34:59.625424       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:59.625493       1 main.go:299] handling current node
	
	
	==> kindnet [e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e] <==
	I0916 10:33:50.598229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:50.599351       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:50.600449       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:50.600526       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:50.600569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:51.126371       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:51.126391       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:51.126399       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:53.293595       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:53.293784       1 metrics.go:61] Registering metrics
	I0916 10:33:53.293935       1 controller.go:374] Syncing nftables rules
	I0916 10:34:01.126660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:01.126723       1 main.go:299] handling current node
	I0916 10:34:11.131420       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:11.131464       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46] <==
	I0916 10:34:37.906296       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:34:37.906350       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:34:37.906357       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:34:37.906380       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:34:37.906400       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:34:37.906408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:34:37.906414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:34:37.908814       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:34:37.908932       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:34:37.908950       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:34:37.912871       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:34:37.916515       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:34:37.923754       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:34:38.812624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:34:39.678850       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:34:39.868256       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:34:39.879574       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:34:39.941085       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:34:39.947167       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:34:56.583902       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:02.580292       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:35:02.631388       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:35:02.925711       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.155.226"}
	I0916 10:35:02.995387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:03.006863       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.172.127"}
	
	
	==> kube-controller-manager [8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b] <==
	I0916 10:33:56.401158       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:33:56.401164       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:33:56.401172       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:33:56.401277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:56.403349       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:33:56.403423       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:33:56.403506       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:33:56.403561       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:33:56.513024       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:33:56.541883       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:56.542896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:33:56.544059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:33:56.544137       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:33:56.544141       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:33:56.548517       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.583700       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:33:56.600343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.606853       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:33:56.702066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.654324ms"
	I0916 10:33:56.702225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.375µs"
	I0916 10:33:57.010557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042413       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:58.552447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.544591ms"
	I0916 10:33:58.552540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.665µs"
	
	
	==> kube-controller-manager [dd99b58642bf7eb44b7455752a1b25ad758e6d5c63ee32949852dcef8026edae] <==
	I0916 10:34:41.425963       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 10:34:41.426047       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:34:41.446292       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:34:41.449685       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:34:41.455194       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:41.866377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951654       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951690       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:02.709012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.757176ms"
	E0916 10:35:02.709065       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.709405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.008906ms"
	E0916 10:35:02.709504       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.642122ms"
	E0916 10:35:02.720256       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.567387ms"
	E0916 10:35:02.720286       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.803923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="82.487261ms"
	I0916 10:35:02.817637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="96.173054ms"
	I0916 10:35:02.898157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="79.487998ms"
	I0916 10:35:02.898365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="73.07µs"
	I0916 10:35:02.908590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="104.542603ms"
	I0916 10:35:02.908674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="37.49µs"
	I0916 10:35:02.908825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="342.16µs"
	I0916 10:35:06.827697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.271508ms"
	I0916 10:35:06.827804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="51.907µs"
	
	
	==> kube-proxy [8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d] <==
	I0916 10:34:39.218200       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:34:39.331180       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:34:39.331273       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:39.352386       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:34:39.352459       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:39.354438       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:39.354816       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:39.354852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:39.355965       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:39.355967       1 config.go:199] "Starting service config controller"
	I0916 10:34:39.356016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:39.356018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:39.356050       1 config.go:328] "Starting node config controller"
	I0916 10:34:39.356062       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:39.456934       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:34:39.456969       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:39.456979       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b] <==
	I0916 10:33:50.617128       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:53.201354       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:53.201554       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:53.314988       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:53.315060       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:53.318944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:53.319862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:53.319904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.321510       1 config.go:199] "Starting service config controller"
	I0916 10:33:53.321547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:53.321583       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:53.321592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:53.322001       1 config.go:328] "Starting node config controller"
	I0916 10:33:53.322360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:53.421890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:53.421914       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:53.422563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a] <==
	I0916 10:33:51.925005       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:33:53.094343       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:33:53.094399       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:33:53.094414       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:33:53.094424       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:33:53.205695       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:33:53.205808       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.208746       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:33:53.208879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:33:53.208938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:33:53.208906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:33:53.309785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:19.550098       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:34:19.550186       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:34:19.550394       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [912dea9fa95088e76fc67e62800091be16d7f78ce4aebdd582e9645601d028f5] <==
	I0916 10:34:36.496922       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:34:37.813654       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:34:37.814327       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:34:37.814409       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:34:37.814446       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:34:37.907304       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:34:37.907329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:37.909440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:34:37.909504       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:34:37.909560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:34:37.909610       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:38.010226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.813147    6025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-546931" podStartSLOduration=0.813122953 podStartE2EDuration="813.122953ms" podCreationTimestamp="2024-09-16 10:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:34:38.813088977 +0000 UTC m=+4.214337302" watchObservedRunningTime="2024-09-16 10:34:38.813122953 +0000 UTC m=+4.214371278"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827241    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-xtables-lock\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827385    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-cni-cfg\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827427    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a7e94614-567e-47ba-a51a-426f09198dba-tmp\") pod \"storage-provisioner\" (UID: \"a7e94614-567e-47ba-a51a-426f09198dba\") " pod="kube-system/storage-provisioner"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827500    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-lib-modules\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827526    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-xtables-lock\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827581    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-lib-modules\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999207    6025 scope.go:117] "RemoveContainer" containerID="500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999378    6025 scope.go:117] "RemoveContainer" containerID="ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999500    6025 scope.go:117] "RemoveContainer" containerID="e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999567    6025 scope.go:117] "RemoveContainer" containerID="a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b"
	Sep 16 10:34:40 functional-546931 kubelet[6025]: I0916 10:34:40.708631    6025 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" path="/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/volumes"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810256    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810297    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811575    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811622    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: E0916 10:35:02.803189    6025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.803256    6025 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900493    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900565    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8a97415-7eb6-4d52-99c2-916e38eb0960-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900597    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4skd\" (UniqueName: \"kubernetes.io/projected/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-kube-api-access-d4skd\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900646    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq9v\" (UniqueName: \"kubernetes.io/projected/e8a97415-7eb6-4d52-99c2-916e38eb0960-kube-api-access-nmq9v\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:03 functional-546931 kubelet[6025]: I0916 10:35:03.009620    6025 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.812961    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.813005    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397] <==
	I0916 10:34:39.127159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:39.136475       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:39.136516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:56.587879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:56.587950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342 became leader
	I0916 10:34:56.588053       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	I0916 10:34:56.688953       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	
	
	==> storage-provisioner [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b] <==
	I0916 10:34:02.111528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:02.120479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:02.120525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:19.534445       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:19.534594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae!
	I0916 10:34:19.534583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae became leader
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546931 -n functional-546931
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (528.851µs)
helpers_test.go:263: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (2.50s)
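Note on the recurring failure: every kubectl invocation in this report dies with "fork/exec /usr/local/bin/kubectl: exec format error", meaning the kernel refuses to execute the binary at all, so the tests never reach the cluster. The kubectl binary on the runner is the suspect (typically a wrong-architecture download, or a truncated/HTML file saved in its place). A minimal check on the affected host, assuming the path from the log (illustrative commands, not part of the test run):

	# Does the binary's architecture match the host's?
	file /usr/local/bin/kubectl   # expect "ELF 64-bit LSB executable, x86-64" on this amd64 runner
	uname -m                      # host architecture, e.g. x86_64
	# A zero-length or non-ELF file produces the same exec error:
	ls -l /usr/local/bin/kubectl
	head -c 4 /usr/local/bin/kubectl | xxd   # a Linux binary starts with the ELF magic 7f 45 4c 46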

TestFunctional/parallel/PersistentVolumeClaim (79.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a7e94614-567e-47ba-a51a-426f09198dba] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004481027s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (613.81µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (494.518µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (494.499µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (459.859µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (508.172µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (538.096µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (514.675µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (487.66µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (501.225µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (479.301µs)
E0916 10:36:06.691923   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:06.699597   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:06.711153   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:06.732560   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:06.773972   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:06.855410   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:07.016968   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:07.338634   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (514.371µs)
E0916 10:36:07.980444   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:09.262580   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:11.824458   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:16.945839   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-546931 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-546931 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (530.258µs)
functional_test_pvc_test.go:65: failed to check for storage class: fork/exec /usr/local/bin/kubectl: exec format error
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-546931 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:69: (dbg) Non-zero exit: kubectl --context functional-546931 apply -f testdata/storage-provisioner/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (438.31µs)
functional_test_pvc_test.go:71: kubectl apply pvc.yaml failed: args "kubectl --context functional-546931 apply -f testdata/storage-provisioner/pvc.yaml": fork/exec /usr/local/bin/kubectl: exec format error
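The PVC test therefore fails before touching the cluster: the storage-class poll at functional_test_pvc_test.go:49 exhausts its retries on the same exec error, and the pvc.yaml apply never runs. (The interleaved cert_rotation errors appear to come from the test client still watching certificates under the deleted addons-821781 profile directory, and look unrelated to this failure.) With a working kubectl, the check the test performs can be reproduced by hand; the jsonpath variant below is an illustrative equivalent, not the test's own code:

	kubectl --context functional-546931 get storageclass -o=json
	# minikube's default class carries this annotation; dots in the key are escaped:
	kubectl --context functional-546931 get storageclass \
	  -o=jsonpath='{.items[*].metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'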
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-546931
helpers_test.go:235: (dbg) docker inspect functional-546931:

-- stdout --
	[
	    {
	        "Id": "481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383",
	        "Created": "2024-09-16T10:33:07.830189623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:33:07.949246182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hostname",
	        "HostsPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hosts",
	        "LogPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383-json.log",
	        "Name": "/functional-546931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-546931:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-546931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-546931",
	                "Source": "/var/lib/docker/volumes/functional-546931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546931",
	                "name.minikube.sigs.k8s.io": "functional-546931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a63c1ddb1b935e3fe8e5ef70fdb0c600197ad5f66a82a23245d6065ac1a636ff",
	            "SandboxKey": "/var/run/docker/netns/a63c1ddb1b93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c19058e5aabeca0bc30434433d26203e7a45051a16cbafeae207abc5b1915f6c",
	                    "EndpointID": "d06fb1106d7a54a1e55e6e03322a29be01414e698106136216a156a15ae725c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546931",
	                        "481b09cdfdae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
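The inspect output above shows the functional-546931 container Running (Pid 35477, ExitCode 0) with all expected ports published on 127.0.0.1, consistent with the failure being client-side (the kubectl binary) rather than a cluster problem. The fields of interest can be pulled directly with Go templates (illustrative):

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}} exit={{.State.ExitCode}}' functional-546931
	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-546931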
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546931 -n functional-546931
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs -n 25: (1.41083859s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-546931 image load --daemon                                      | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | kicbase/echo-server:functional-546931                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /etc/ssl/certs/11208.pem                                                   |                   |         |         |                     |                     |
	| ssh            | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /usr/share/ca-certificates/11208.pem                                       |                   |         |         |                     |                     |
	| image          | functional-546931 image ls                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| ssh            | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /etc/ssl/certs/51391683.0                                                  |                   |         |         |                     |                     |
	| image          | functional-546931 image save kicbase/echo-server:functional-546931         | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /etc/ssl/certs/112082.pem                                                  |                   |         |         |                     |                     |
	| ssh            | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /usr/share/ca-certificates/112082.pem                                      |                   |         |         |                     |                     |
	| ssh            | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                  |                   |         |         |                     |                     |
	| image          | functional-546931 image rm                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | kicbase/echo-server:functional-546931                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-546931 image ls                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| image          | functional-546931 image load                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-546931 image ls                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| image          | functional-546931 image save --daemon                                      | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|                | kicbase/echo-server:functional-546931                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | /etc/test/nested/copy/11208/hosts                                          |                   |         |         |                     |                     |
	| update-context | functional-546931                                                          | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-546931                                                          | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-546931                                                          | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| image          | functional-546931                                                          | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-546931                                                          | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-546931 ssh pgrep                                                | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-546931                                                          | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-546931 image build -t                                           | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | localhost/my-image:functional-546931                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-546931                                                          | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-546931 image ls                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:35:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:35:00.918258   48694 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:00.918452   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918475   48694 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:00.918487   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918709   48694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:35:00.919256   48694 out.go:352] Setting JSON to false
	I0916 10:35:00.920662   48694 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1041,"bootTime":1726481860,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:00.920778   48694 start.go:139] virtualization: kvm guest
	I0916 10:35:00.924235   48694 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:00.931262   48694 notify.go:220] Checking for updates...
	I0916 10:35:00.931605   48694 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:00.933358   48694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:00.935102   48694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:35:00.936553   48694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:35:00.937907   48694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:00.939153   48694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:00.941266   48694 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:00.942118   48694 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:00.982940   48694 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:35:00.983034   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.072175   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.05984963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.072322   48694 docker.go:318] overlay module found
	I0916 10:35:01.074333   48694 out.go:177] * Using the docker driver based on existing profile
	I0916 10:35:01.075819   48694 start.go:297] selected driver: docker
	I0916 10:35:01.075840   48694 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.075969   48694 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:01.076061   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.145804   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.134479908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.146698   48694 cni.go:84] Creating CNI manager for ""
	I0916 10:35:01.146754   48694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:35:01.146819   48694 start.go:340] cluster config:
	{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.148893   48694 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.808875291Z" level=info msg="Checking image status: kicbase/echo-server:functional-546931" id=a0af7080-7df5-4f06-bd74-44bd8ef316cf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.847591643Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-546931" id=de85d1f1-18ae-4477-a31e-e19d083d37f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.847885607Z" level=info msg="Image docker.io/kicbase/echo-server:functional-546931 not found" id=de85d1f1-18ae-4477-a31e-e19d083d37f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.882554347Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-546931" id=d07d17ef-fa56-4de9-af5b-c072f0bc1893 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.882742608Z" level=info msg="Image localhost/kicbase/echo-server:functional-546931 not found" id=d07d17ef-fa56-4de9-af5b-c072f0bc1893 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.936425916Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=36f52340-c482-4672-a901-32512ef4d80e name=/runtime.v1.ImageService/PullImage
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.937041688Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6f8882c1-4c30-467a-b1e4-9c1c415d2c47 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.937894546Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6f8882c1-4c30-467a-b1e4-9c1c415d2c47 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.938688195Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3009700c-98a6-4f62-8893-ddc7140fdbff name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.939373304Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=3009700c-98a6-4f62-8893-ddc7140fdbff name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.940125758Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/kubernetes-dashboard" id=5470bce4-f0e4-43ee-b00c-bda6563156a6 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.940219967Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.952789081Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b44af9aa49bd3a7c8ea7269da4af5d7d6a8b034c7e5afba99af14c2bb88835f2/merged/etc/group: no such file or directory"
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.994065621Z" level=info msg="Created container 4857b289b743e68c2d752b6b42d6dc46a1822cc11a3439e84dae59f3cd0fcafb: kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/kubernetes-dashboard" id=5470bce4-f0e4-43ee-b00c-bda6563156a6 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.994750741Z" level=info msg="Starting container: 4857b289b743e68c2d752b6b42d6dc46a1822cc11a3439e84dae59f3cd0fcafb" id=4f497c1a-5552-4e4e-9038-a4e67ba3996e name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.001803768Z" level=info msg="Started container" PID=10064 containerID=4857b289b743e68c2d752b6b42d6dc46a1822cc11a3439e84dae59f3cd0fcafb description=kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/kubernetes-dashboard id=4f497c1a-5552-4e4e-9038-a4e67ba3996e name=/runtime.v1.RuntimeService/StartContainer sandboxID=02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.423419210Z" level=info msg="Checking image status: kicbase/echo-server:functional-546931" id=80cb243a-9295-4c61-88c0-cbe23e6ed4da name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.494837914Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-546931" id=7193dfb2-e56b-4c36-92d8-36fbc7b07a12 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.495088463Z" level=info msg="Image docker.io/kicbase/echo-server:functional-546931 not found" id=7193dfb2-e56b-4c36-92d8-36fbc7b07a12 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.529547813Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-546931" id=b78d7fb7-ae21-4d54-901d-188a2da2fcd6 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.529801297Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[localhost/kicbase/echo-server:functional-546931],RepoDigests:[localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf],Size_:4943877,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b78d7fb7-ae21-4d54-901d-188a2da2fcd6 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:34 functional-546931 crio[5663]: time="2024-09-16 10:35:34.696078850Z" level=info msg="Stopping pod sandbox: e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a" id=16fa90dd-0322-4733-b9cc-44d6309e426c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:35:34 functional-546931 crio[5663]: time="2024-09-16 10:35:34.696120604Z" level=info msg="Stopped pod sandbox (already stopped): e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a" id=16fa90dd-0322-4733-b9cc-44d6309e426c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:35:34 functional-546931 crio[5663]: time="2024-09-16 10:35:34.696458462Z" level=info msg="Removing pod sandbox: e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a" id=977dfc80-6a00-4db1-adfa-4b84fbe2a143 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 16 10:35:34 functional-546931 crio[5663]: time="2024-09-16 10:35:34.701815444Z" level=info msg="Removed pod sandbox: e87884b43c8cc0092f8d7daa14566100bae903e05c6780665da03bdf7ce9af2a" id=977dfc80-6a00-4db1-adfa-4b84fbe2a143 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	4857b289b743e       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   02f8caa1ee139       kubernetes-dashboard-695b96c756-5ftj6
	716706ee816f0       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   03844bf992fc9       dashboard-metrics-scraper-c5db448b4-7c2lp
	b8b7b2145f381       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 About a minute ago   Running             coredns                     2                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	79a9d7528eb3f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 About a minute ago   Running             kindnet-cni                 2                   4aa3f5aefc537       kindnet-6dtx8
	8b4c53b5f60bc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 About a minute ago   Running             kube-proxy                  2                   f14f9778290af       kube-proxy-kshs9
	1cc14bbfee0f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 About a minute ago   Running             storage-provisioner         3                   2133c690032da       storage-provisioner
	a27a3ce3a5b44       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 About a minute ago   Running             kube-apiserver              0                   af1925dee3fc2       kube-apiserver-functional-546931
	442cc07de2d20       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 About a minute ago   Running             etcd                        2                   5b3fe285a2416       etcd-functional-546931
	912dea9fa9508       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 About a minute ago   Running             kube-scheduler              2                   f41f93397a4f0       kube-scheduler-functional-546931
	dd99b58642bf7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 About a minute ago   Running             kube-controller-manager     2                   878410a4a3694       kube-controller-manager-functional-546931
	a51e8bf1740c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 2 minutes ago        Exited              storage-provisioner         2                   2133c690032da       storage-provisioner
	03c9ff61deb56       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 2 minutes ago        Exited              kube-scheduler              1                   f41f93397a4f0       kube-scheduler-functional-546931
	500f67fe93de9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 2 minutes ago        Exited              coredns                     1                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	1923f1dc4c46c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 2 minutes ago        Exited              etcd                        1                   5b3fe285a2416       etcd-functional-546931
	8578098c4830c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 2 minutes ago        Exited              kube-controller-manager     1                   878410a4a3694       kube-controller-manager-functional-546931
	e2626d8943ee8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 2 minutes ago        Exited              kindnet-cni                 1                   4aa3f5aefc537       kindnet-6dtx8
	ce7cf09b88b18       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 2 minutes ago        Exited              kube-proxy                  1                   f14f9778290af       kube-proxy-kshs9
	
	
	==> coredns [500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32777 - 2477 "HINFO IN 3420670606416057959.5314460485211468677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080961734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48590 - 30001 "HINFO IN 6895879156775148846.7943209663817132014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009362696s
	
	
	==> describe nodes <==
	Name:               functional-546931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-546931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_33_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:36:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:35:39 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:35:39 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:35:39 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:35:39 +0000   Mon, 16 Sep 2024 10:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f68b7ee331b4ad9bbce7c85ad5c1bae
	  System UUID:                b53a3b64-9d61-46d9-a694-0cd93fe258a6
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wjzzx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m56s
	  kube-system                 etcd-functional-546931                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m3s
	  kube-system                 kindnet-6dtx8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m56s
	  kube-system                 kube-apiserver-functional-546931             250m (3%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-functional-546931    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 kube-proxy-kshs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-scheduler-functional-546931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-7c2lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-5ftj6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m55s                kube-proxy       
	  Normal   Starting                 104s                 kube-proxy       
	  Normal   Starting                 2m30s                kube-proxy       
	  Normal   NodeHasSufficientMemory  3m7s (x8 over 3m7s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m7s (x8 over 3m7s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m7s (x7 over 3m7s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     3m1s                 kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 3m1s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m1s                 kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m1s                 kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 3m1s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m58s                node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   NodeReady                2m45s                kubelet          Node functional-546931 status is now: NodeReady
	  Normal   RegisteredNode           2m27s                node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   Starting                 109s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 109s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  108s (x8 over 109s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x8 over 109s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x7 over 109s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           102s                 node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	
	
	==> dmesg <==
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 10:35] FS-Cache: Duplicate cookie detected
	[  +0.005031] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000007485c404{9P.session} n=000000002b39a795
	[  +0.007541] FS-Cache: O-key=[10] '34323935313533303732'
	[  +0.005370] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.006617] FS-Cache: N-cookie d=000000007485c404{9P.session} n=00000000364f9863
	[  +0.008939] FS-Cache: N-key=[10] '34323935313533303732'
	[ +14.884982] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54] <==
	{"level":"info","ts":"2024-09-16T10:33:51.496123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.497277Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:51.497313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.497494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.498556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.498618Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.499441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:51.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.549372Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:19.549504Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:19.549651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.549778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567753Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:19.567807Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:34:19.570718Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570822Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570856Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [442cc07de2d20f1858aca970b1589445d9119ae98c169613f5a7a2162fb91a1f] <==
	{"level":"info","ts":"2024-09-16T10:34:35.628722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:35.628909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.629009Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.629102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.630742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:35.630981Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:35.631046Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:35.631386Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:35.631405Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:36.820902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.824459Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:36.824466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.824748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.826097Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.826340Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.827299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:36.827338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:36:23 up 18 min,  0 users,  load average: 0.61, 0.59, 0.38
	Linux functional-546931 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50] <==
	I0916 10:34:39.618549       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:34:39.618556       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:34:39.918628       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:34:39.918678       1 metrics.go:61] Registering metrics
	I0916 10:34:39.918760       1 controller.go:374] Syncing nftables rules
	I0916 10:34:49.618556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:49.618661       1 main.go:299] handling current node
	I0916 10:34:59.625424       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:59.625493       1 main.go:299] handling current node
	I0916 10:35:09.618523       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:35:09.618582       1 main.go:299] handling current node
	I0916 10:35:19.621407       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:35:19.621444       1 main.go:299] handling current node
	I0916 10:35:29.625804       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:35:29.625853       1 main.go:299] handling current node
	I0916 10:35:39.619188       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:35:39.619222       1 main.go:299] handling current node
	I0916 10:35:49.623135       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:35:49.623180       1 main.go:299] handling current node
	I0916 10:35:59.626323       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:35:59.626362       1 main.go:299] handling current node
	I0916 10:36:09.619003       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:36:09.619045       1 main.go:299] handling current node
	I0916 10:36:19.627389       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:36:19.627432       1 main.go:299] handling current node
	
	
	==> kindnet [e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e] <==
	I0916 10:33:50.598229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:50.599351       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:50.600449       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:50.600526       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:50.600569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:51.126371       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:51.126391       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:51.126399       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:53.293595       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:53.293784       1 metrics.go:61] Registering metrics
	I0916 10:33:53.293935       1 controller.go:374] Syncing nftables rules
	I0916 10:34:01.126660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:01.126723       1 main.go:299] handling current node
	I0916 10:34:11.131420       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:11.131464       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46] <==
	I0916 10:34:37.906296       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:34:37.906350       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:34:37.906357       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:34:37.906380       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:34:37.906400       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:34:37.906408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:34:37.906414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:34:37.908814       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:34:37.908932       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:34:37.908950       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:34:37.912871       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:34:37.916515       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:34:37.923754       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:34:38.812624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:34:39.678850       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:34:39.868256       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:34:39.879574       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:34:39.941085       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:34:39.947167       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:34:56.583902       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:02.580292       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:35:02.631388       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:35:02.925711       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.155.226"}
	I0916 10:35:02.995387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:03.006863       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.172.127"}
	
	
	==> kube-controller-manager [8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b] <==
	I0916 10:33:56.401158       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:33:56.401164       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:33:56.401172       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:33:56.401277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:56.403349       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:33:56.403423       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:33:56.403506       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:33:56.403561       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:33:56.513024       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:33:56.541883       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:56.542896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:33:56.544059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:33:56.544137       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:33:56.544141       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:33:56.548517       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.583700       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:33:56.600343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.606853       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:33:56.702066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.654324ms"
	I0916 10:33:56.702225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.375µs"
	I0916 10:33:57.010557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042413       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:58.552447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.544591ms"
	I0916 10:33:58.552540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.665µs"
	
	
	==> kube-controller-manager [dd99b58642bf7eb44b7455752a1b25ad758e6d5c63ee32949852dcef8026edae] <==
	I0916 10:34:41.449685       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:34:41.455194       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:41.866377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951654       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951690       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:02.709012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.757176ms"
	E0916 10:35:02.709065       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.709405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.008906ms"
	E0916 10:35:02.709504       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.642122ms"
	E0916 10:35:02.720256       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.567387ms"
	E0916 10:35:02.720286       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.803923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="82.487261ms"
	I0916 10:35:02.817637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="96.173054ms"
	I0916 10:35:02.898157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="79.487998ms"
	I0916 10:35:02.898365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="73.07µs"
	I0916 10:35:02.908590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="104.542603ms"
	I0916 10:35:02.908674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="37.49µs"
	I0916 10:35:02.908825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="342.16µs"
	I0916 10:35:06.827697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.271508ms"
	I0916 10:35:06.827804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="51.907µs"
	I0916 10:35:13.844219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.700699ms"
	I0916 10:35:13.844473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.711µs"
	I0916 10:35:39.041673       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	
	
	==> kube-proxy [8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d] <==
	I0916 10:34:39.218200       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:34:39.331180       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:34:39.331273       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:39.352386       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:34:39.352459       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:39.354438       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:39.354816       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:39.354852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:39.355965       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:39.355967       1 config.go:199] "Starting service config controller"
	I0916 10:34:39.356016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:39.356018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:39.356050       1 config.go:328] "Starting node config controller"
	I0916 10:34:39.356062       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:39.456934       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:34:39.456969       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:39.456979       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b] <==
	I0916 10:33:50.617128       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:53.201354       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:53.201554       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:53.314988       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:53.315060       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:53.318944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:53.319862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:53.319904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.321510       1 config.go:199] "Starting service config controller"
	I0916 10:33:53.321547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:53.321583       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:53.321592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:53.322001       1 config.go:328] "Starting node config controller"
	I0916 10:33:53.322360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:53.421890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:53.421914       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:53.422563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a] <==
	I0916 10:33:51.925005       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:33:53.094343       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:33:53.094399       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:33:53.094414       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:33:53.094424       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:33:53.205695       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:33:53.205808       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.208746       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:33:53.208879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:33:53.208938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:33:53.208906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:33:53.309785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:19.550098       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:34:19.550186       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:34:19.550394       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [912dea9fa95088e76fc67e62800091be16d7f78ce4aebdd582e9645601d028f5] <==
	I0916 10:34:36.496922       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:34:37.813654       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:34:37.814327       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:34:37.814409       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:34:37.814446       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:34:37.907304       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:34:37.907329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:37.909440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:34:37.909504       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:34:37.909560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:34:37.909610       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:38.010226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811622    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: E0916 10:35:02.803189    6025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.803256    6025 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900493    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900565    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8a97415-7eb6-4d52-99c2-916e38eb0960-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900597    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4skd\" (UniqueName: \"kubernetes.io/projected/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-kube-api-access-d4skd\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900646    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq9v\" (UniqueName: \"kubernetes.io/projected/e8a97415-7eb6-4d52-99c2-916e38eb0960-kube-api-access-nmq9v\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:03 functional-546931 kubelet[6025]: I0916 10:35:03.009620    6025 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.812961    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.813005    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:13 functional-546931 kubelet[6025]: I0916 10:35:13.835670    6025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp" podStartSLOduration=8.425084592 podStartE2EDuration="11.835645242s" podCreationTimestamp="2024-09-16 10:35:02 +0000 UTC" firstStartedPulling="2024-09-16 10:35:03.1414915 +0000 UTC m=+28.542739814" lastFinishedPulling="2024-09-16 10:35:06.552052157 +0000 UTC m=+31.953300464" observedRunningTime="2024-09-16 10:35:06.822000908 +0000 UTC m=+32.223249234" watchObservedRunningTime="2024-09-16 10:35:13.835645242 +0000 UTC m=+39.236893567"
	Sep 16 10:35:14 functional-546931 kubelet[6025]: E0916 10:35:14.814590    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482914814401432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:186314,},InodesUsed:&UInt64Value{Value:94,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:14 functional-546931 kubelet[6025]: E0916 10:35:14.814634    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482914814401432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:186314,},InodesUsed:&UInt64Value{Value:94,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:24 functional-546931 kubelet[6025]: E0916 10:35:24.816711    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482924816444150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:24 functional-546931 kubelet[6025]: E0916 10:35:24.816757    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482924816444150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:34 functional-546931 kubelet[6025]: E0916 10:35:34.818474    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482934818315039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:34 functional-546931 kubelet[6025]: E0916 10:35:34.818517    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482934818315039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:44 functional-546931 kubelet[6025]: E0916 10:35:44.819666    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482944819470954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:44 functional-546931 kubelet[6025]: E0916 10:35:44.819713    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482944819470954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:54 functional-546931 kubelet[6025]: E0916 10:35:54.820994    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482954820803242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:54 functional-546931 kubelet[6025]: E0916 10:35:54.821036    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482954820803242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:04 functional-546931 kubelet[6025]: E0916 10:36:04.822487    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482964822265088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:04 functional-546931 kubelet[6025]: E0916 10:36:04.822803    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482964822265088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:14 functional-546931 kubelet[6025]: E0916 10:36:14.824085    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482974823918695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:14 functional-546931 kubelet[6025]: E0916 10:36:14.824130    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482974823918695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211160,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [4857b289b743e68c2d752b6b42d6dc46a1822cc11a3439e84dae59f3cd0fcafb] <==
	2024/09/16 10:35:13 Starting overwatch
	2024/09/16 10:35:13 Using namespace: kubernetes-dashboard
	2024/09/16 10:35:13 Using in-cluster config to connect to apiserver
	2024/09/16 10:35:13 Using secret token for csrf signing
	2024/09/16 10:35:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:35:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:35:13 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:35:13 Generating JWE encryption key
	2024/09/16 10:35:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:35:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:35:13 Initializing JWE encryption key from synchronized object
	2024/09/16 10:35:13 Creating in-cluster Sidecar client
	2024/09/16 10:35:13 Successful request to sidecar
	2024/09/16 10:35:13 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397] <==
	I0916 10:34:39.127159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:39.136475       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:39.136516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:56.587879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:56.587950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342 became leader
	I0916 10:34:56.588053       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	I0916 10:34:56.688953       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	
	
	==> storage-provisioner [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b] <==
	I0916 10:34:02.111528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:02.120479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:02.120525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:19.534445       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:19.534594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae!
	I0916 10:34:19.534583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae became leader
	

-- /stdout --
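Both kube-proxy instances in the logs above emit the same configuration warning: nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. This is unrelated to the kubectl failures in this test, but the remediation the warning itself suggests is small. A hedged sketch, assuming the standard kubeadm layout that minikube uses (kube-proxy reads a KubeProxyConfiguration from the config.conf key of the kube-proxy ConfigMap; the "primary" value requires a recent Kubernetes release, and this cluster runs v1.31.1):

	# Locate the nodePortAddresses field in the kube-proxy configuration:
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	# The warning suggests restricting NodePort listeners to the node's primary IP,
	# i.e. setting the following field in that KubeProxyConfiguration:
	#     nodePortAddresses: ["primary"]
	# Restart the kube-proxy pods so they pick up the edited config:
	kubectl -n kube-system rollout restart daemonset kube-proxy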
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546931 -n functional-546931
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (460.54µs)
helpers_test.go:263: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
E0916 10:36:27.187160   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:36:47.668651   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (79.91s)
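Every kubectl invocation in this run fails the same way: fork/exec /usr/local/bin/kubectl: exec format error. That error comes from the kernel refusing to execute the binary on the CI host, not from the cluster; it usually means the binary was built for a different architecture or is not a valid executable at all (for example a truncated or HTML-error-page download). A minimal diagnostic sketch, assuming the linux/amd64 host described in the Last Start logs below:

	# Compare the binary's architecture with the host's:
	file /usr/local/bin/kubectl   # expect: ELF 64-bit LSB executable, x86-64, ...
	uname -m                      # expect: x86_64
	# A valid ELF binary starts with the magic bytes 7f 45 4c 46 ("\177ELF"):
	head -c 4 /usr/local/bin/kubectl | od -An -c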
TestFunctional/parallel/MySQL (2.45s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-546931 replace --force -f testdata/mysql.yaml
functional_test.go:1793: (dbg) Non-zero exit: kubectl --context functional-546931 replace --force -f testdata/mysql.yaml: fork/exec /usr/local/bin/kubectl: exec format error (587.223µs)
functional_test.go:1795: failed to kubectl replace mysql: args "kubectl --context functional-546931 replace --force -f testdata/mysql.yaml" failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-546931
helpers_test.go:235: (dbg) docker inspect functional-546931:

-- stdout --
	[
	    {
	        "Id": "481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383",
	        "Created": "2024-09-16T10:33:07.830189623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:33:07.949246182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hostname",
	        "HostsPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hosts",
	        "LogPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383-json.log",
	        "Name": "/functional-546931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-546931:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-546931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-546931",
	                "Source": "/var/lib/docker/volumes/functional-546931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546931",
	                "name.minikube.sigs.k8s.io": "functional-546931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a63c1ddb1b935e3fe8e5ef70fdb0c600197ad5f66a82a23245d6065ac1a636ff",
	            "SandboxKey": "/var/run/docker/netns/a63c1ddb1b93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c19058e5aabeca0bc30434433d26203e7a45051a16cbafeae207abc5b1915f6c",
	                    "EndpointID": "d06fb1106d7a54a1e55e6e03322a29be01414e698106136216a156a15ae725c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546931",
	                        "481b09cdfdae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
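The inspect dump above confirms the container itself is healthy: it is running, and the API server port 8441/tcp is published on 127.0.0.1:32781, consistent with the failure being local to the kubectl binary. As a usage aside, individual fields can be pulled out of docker inspect with its Go-template support instead of dumping the whole document, e.g.:

	# Container state and the published ports, without the full JSON dump:
	docker inspect --format '{{.State.Status}}' functional-546931
	docker inspect --format '{{json .NetworkSettings.Ports}}' functional-546931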
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546931 -n functional-546931
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs -n 25: (1.69942376s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-546931 ssh findmnt                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | -T /mount1                                                                 |                   |         |         |                     |                     |
	| license |                                                                            | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| mount   | -p functional-546931                                                       | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount2     |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                     |                   |         |         |                     |                     |
	| mount   | -p functional-546931                                                       | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount3     |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                     |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -T /mount1                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -T /mount2                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh findmnt                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -T /mount3                                                                 |                   |         |         |                     |                     |
	| mount   | -p functional-546931                                                       | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | --kill=true                                                                |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | systemctl is-active docker                                                 |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | systemctl is-active containerd                                             |                   |         |         |                     |                     |
	| addons  | functional-546931 addons list                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| addons  | functional-546931 addons list                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | -o json                                                                    |                   |         |         |                     |                     |
	| image   | functional-546931 image load --daemon                                      | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | kicbase/echo-server:functional-546931                                      |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image   | functional-546931 image ls                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| image   | functional-546931 image load --daemon                                      | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | kicbase/echo-server:functional-546931                                      |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image   | functional-546931 image ls                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| image   | functional-546931 image load --daemon                                      | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | kicbase/echo-server:functional-546931                                      |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | /etc/ssl/certs/11208.pem                                                   |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | /usr/share/ca-certificates/11208.pem                                       |                   |         |         |                     |                     |
	| image   | functional-546931 image ls                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| ssh     | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | /etc/ssl/certs/51391683.0                                                  |                   |         |         |                     |                     |
	| image   | functional-546931 image save kicbase/echo-server:functional-546931         | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | /etc/ssl/certs/112082.pem                                                  |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | /usr/share/ca-certificates/112082.pem                                      |                   |         |         |                     |                     |
	| ssh     | functional-546931 ssh sudo cat                                             | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | /etc/ssl/certs/3ec20f2e.0                                                  |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:35:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:35:00.918258   48694 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:00.918452   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918475   48694 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:00.918487   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918709   48694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:35:00.919256   48694 out.go:352] Setting JSON to false
	I0916 10:35:00.920662   48694 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1041,"bootTime":1726481860,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:00.920778   48694 start.go:139] virtualization: kvm guest
	I0916 10:35:00.924235   48694 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:00.931262   48694 notify.go:220] Checking for updates...
	I0916 10:35:00.931605   48694 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:00.933358   48694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:00.935102   48694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:35:00.936553   48694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:35:00.937907   48694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:00.939153   48694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:00.941266   48694 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:00.942118   48694 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:00.982940   48694 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:35:00.983034   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.072175   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.05984963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.072322   48694 docker.go:318] overlay module found
	I0916 10:35:01.074333   48694 out.go:177] * Using the docker driver based on existing profile
	I0916 10:35:01.075819   48694 start.go:297] selected driver: docker
	I0916 10:35:01.075840   48694 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.075969   48694 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:01.076061   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.145804   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.134479908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.146698   48694 cni.go:84] Creating CNI manager for ""
	I0916 10:35:01.146754   48694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:35:01.146819   48694 start.go:340] cluster config:
	{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.148893   48694 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.599217946Z" level=info msg="Created container 716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/dashboard-metrics-scraper" id=98a47449-2f6d-4aa0-95a8-192eaf56a2ad name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.599856261Z" level=info msg="Starting container: 716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979" id=de200919-d589-4879-b309-49db1992d297 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.605828097Z" level=info msg="Started container" PID=8987 containerID=716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979 description=kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/dashboard-metrics-scraper id=de200919-d589-4879-b309-49db1992d297 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03844bf992fc98df6d81bbcc15fb2182753b34df7aabffa7794374eb4e70f936
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.556528532Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.808875291Z" level=info msg="Checking image status: kicbase/echo-server:functional-546931" id=a0af7080-7df5-4f06-bd74-44bd8ef316cf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.847591643Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-546931" id=de85d1f1-18ae-4477-a31e-e19d083d37f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.847885607Z" level=info msg="Image docker.io/kicbase/echo-server:functional-546931 not found" id=de85d1f1-18ae-4477-a31e-e19d083d37f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.882554347Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-546931" id=d07d17ef-fa56-4de9-af5b-c072f0bc1893 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:07 functional-546931 crio[5663]: time="2024-09-16 10:35:07.882742608Z" level=info msg="Image localhost/kicbase/echo-server:functional-546931 not found" id=d07d17ef-fa56-4de9-af5b-c072f0bc1893 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.936425916Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=36f52340-c482-4672-a901-32512ef4d80e name=/runtime.v1.ImageService/PullImage
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.937041688Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6f8882c1-4c30-467a-b1e4-9c1c415d2c47 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.937894546Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6f8882c1-4c30-467a-b1e4-9c1c415d2c47 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.938688195Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3009700c-98a6-4f62-8893-ddc7140fdbff name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.939373304Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=3009700c-98a6-4f62-8893-ddc7140fdbff name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.940125758Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/kubernetes-dashboard" id=5470bce4-f0e4-43ee-b00c-bda6563156a6 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.940219967Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.952789081Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b44af9aa49bd3a7c8ea7269da4af5d7d6a8b034c7e5afba99af14c2bb88835f2/merged/etc/group: no such file or directory"
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.994065621Z" level=info msg="Created container 4857b289b743e68c2d752b6b42d6dc46a1822cc11a3439e84dae59f3cd0fcafb: kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/kubernetes-dashboard" id=5470bce4-f0e4-43ee-b00c-bda6563156a6 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:12 functional-546931 crio[5663]: time="2024-09-16 10:35:12.994750741Z" level=info msg="Starting container: 4857b289b743e68c2d752b6b42d6dc46a1822cc11a3439e84dae59f3cd0fcafb" id=4f497c1a-5552-4e4e-9038-a4e67ba3996e name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.001803768Z" level=info msg="Started container" PID=10064 containerID=4857b289b743e68c2d752b6b42d6dc46a1822cc11a3439e84dae59f3cd0fcafb description=kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/kubernetes-dashboard id=4f497c1a-5552-4e4e-9038-a4e67ba3996e name=/runtime.v1.RuntimeService/StartContainer sandboxID=02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.423419210Z" level=info msg="Checking image status: kicbase/echo-server:functional-546931" id=80cb243a-9295-4c61-88c0-cbe23e6ed4da name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.494837914Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-546931" id=7193dfb2-e56b-4c36-92d8-36fbc7b07a12 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.495088463Z" level=info msg="Image docker.io/kicbase/echo-server:functional-546931 not found" id=7193dfb2-e56b-4c36-92d8-36fbc7b07a12 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.529547813Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-546931" id=b78d7fb7-ae21-4d54-901d-188a2da2fcd6 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:13 functional-546931 crio[5663]: time="2024-09-16 10:35:13.529801297Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[localhost/kicbase/echo-server:functional-546931],RepoDigests:[localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf],Size_:4943877,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b78d7fb7-ae21-4d54-901d-188a2da2fcd6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED                  STATE               NAME                        ATTEMPT             POD ID              POD
	4857b289b743e       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         Less than a second ago   Running             kubernetes-dashboard        0                   02f8caa1ee139       kubernetes-dashboard-695b96c756-5ftj6
	716706ee816f0       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   7 seconds ago            Running             dashboard-metrics-scraper   0                   03844bf992fc9       dashboard-metrics-scraper-c5db448b4-7c2lp
	b8b7b2145f381       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 34 seconds ago           Running             coredns                     2                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	79a9d7528eb3f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 34 seconds ago           Running             kindnet-cni                 2                   4aa3f5aefc537       kindnet-6dtx8
	8b4c53b5f60bc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 34 seconds ago           Running             kube-proxy                  2                   f14f9778290af       kube-proxy-kshs9
	1cc14bbfee0f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 34 seconds ago           Running             storage-provisioner         3                   2133c690032da       storage-provisioner
	a27a3ce3a5b44       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 38 seconds ago           Running             kube-apiserver              0                   af1925dee3fc2       kube-apiserver-functional-546931
	442cc07de2d20       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 38 seconds ago           Running             etcd                        2                   5b3fe285a2416       etcd-functional-546931
	912dea9fa9508       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 38 seconds ago           Running             kube-scheduler              2                   f41f93397a4f0       kube-scheduler-functional-546931
	dd99b58642bf7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 38 seconds ago           Running             kube-controller-manager     2                   878410a4a3694       kube-controller-manager-functional-546931
	a51e8bf1740c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 About a minute ago       Exited              storage-provisioner         2                   2133c690032da       storage-provisioner
	03c9ff61deb56       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 About a minute ago       Exited              kube-scheduler              1                   f41f93397a4f0       kube-scheduler-functional-546931
	500f67fe93de9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 About a minute ago       Exited              coredns                     1                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	1923f1dc4c46c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 About a minute ago       Exited              etcd                        1                   5b3fe285a2416       etcd-functional-546931
	8578098c4830c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 About a minute ago       Exited              kube-controller-manager     1                   878410a4a3694       kube-controller-manager-functional-546931
	e2626d8943ee8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 About a minute ago       Exited              kindnet-cni                 1                   4aa3f5aefc537       kindnet-6dtx8
	ce7cf09b88b18       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 About a minute ago       Exited              kube-proxy                  1                   f14f9778290af       kube-proxy-kshs9
	
	
	==> coredns [500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32777 - 2477 "HINFO IN 3420670606416057959.5314460485211468677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080961734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48590 - 30001 "HINFO IN 6895879156775148846.7943209663817132014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009362696s
	
	
	==> describe nodes <==
	Name:               functional-546931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-546931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_33_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:35:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f68b7ee331b4ad9bbce7c85ad5c1bae
	  System UUID:                b53a3b64-9d61-46d9-a694-0cd93fe258a6
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wjzzx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-functional-546931                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-6dtx8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-functional-546931             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-functional-546931    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-kshs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-functional-546931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-7c2lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-5ftj6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 106s                 kube-proxy       
	  Normal   Starting                 34s                  kube-proxy       
	  Normal   Starting                 80s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     111s                 kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 111s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  111s                 kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s                 kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 111s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           108s                 node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   NodeReady                95s                  kubelet          Node functional-546931 status is now: NodeReady
	  Normal   RegisteredNode           77s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   Starting                 39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 39s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  38s (x8 over 39s)    kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    38s (x8 over 39s)    kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s (x7 over 39s)    kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	
	
	==> dmesg <==
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 10:35] FS-Cache: Duplicate cookie detected
	[  +0.005031] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000007485c404{9P.session} n=000000002b39a795
	[  +0.007541] FS-Cache: O-key=[10] '34323935313533303732'
	[  +0.005370] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.006617] FS-Cache: N-cookie d=000000007485c404{9P.session} n=00000000364f9863
	[  +0.008939] FS-Cache: N-key=[10] '34323935313533303732'
	
	
	==> etcd [1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54] <==
	{"level":"info","ts":"2024-09-16T10:33:51.496123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.497277Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:51.497313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.497494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.498556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.498618Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.499441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:51.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.549372Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:19.549504Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:19.549651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.549778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567753Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:19.567807Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:34:19.570718Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570822Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570856Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [442cc07de2d20f1858aca970b1589445d9119ae98c169613f5a7a2162fb91a1f] <==
	{"level":"info","ts":"2024-09-16T10:34:35.628722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:35.628909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.629009Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.629102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.630742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:35.630981Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:35.631046Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:35.631386Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:35.631405Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:36.820902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.824459Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:36.824466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.824748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.826097Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.826340Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.827299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:36.827338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:35:13 up 17 min,  0 users,  load average: 1.73, 0.72, 0.41
	Linux functional-546931 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50] <==
	I0916 10:34:39.296704       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:34:39.296992       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:34:39.297141       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:34:39.297157       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:34:39.297188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:34:39.618532       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:34:39.618549       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:34:39.618556       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:34:39.918628       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:34:39.918678       1 metrics.go:61] Registering metrics
	I0916 10:34:39.918760       1 controller.go:374] Syncing nftables rules
	I0916 10:34:49.618556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:49.618661       1 main.go:299] handling current node
	I0916 10:34:59.625424       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:59.625493       1 main.go:299] handling current node
	I0916 10:35:09.618523       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:35:09.618582       1 main.go:299] handling current node
	
	
	==> kindnet [e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e] <==
	I0916 10:33:50.598229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:50.599351       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:50.600449       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:50.600526       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:50.600569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:51.126371       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:51.126391       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:51.126399       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:53.293595       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:53.293784       1 metrics.go:61] Registering metrics
	I0916 10:33:53.293935       1 controller.go:374] Syncing nftables rules
	I0916 10:34:01.126660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:01.126723       1 main.go:299] handling current node
	I0916 10:34:11.131420       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:11.131464       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46] <==
	I0916 10:34:37.906296       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:34:37.906350       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:34:37.906357       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:34:37.906380       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:34:37.906400       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:34:37.906408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:34:37.906414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:34:37.908814       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:34:37.908932       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:34:37.908950       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:34:37.912871       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:34:37.916515       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:34:37.923754       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:34:38.812624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:34:39.678850       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:34:39.868256       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:34:39.879574       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:34:39.941085       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:34:39.947167       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:34:56.583902       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:02.580292       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:35:02.631388       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:35:02.925711       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.155.226"}
	I0916 10:35:02.995387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:03.006863       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.172.127"}
	
	
	==> kube-controller-manager [8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b] <==
	I0916 10:33:56.401158       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:33:56.401164       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:33:56.401172       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:33:56.401277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:56.403349       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:33:56.403423       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:33:56.403506       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:33:56.403561       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:33:56.513024       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:33:56.541883       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:56.542896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:33:56.544059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:33:56.544137       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:33:56.544141       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:33:56.548517       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.583700       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:33:56.600343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.606853       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:33:56.702066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.654324ms"
	I0916 10:33:56.702225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.375µs"
	I0916 10:33:57.010557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042413       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:58.552447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.544591ms"
	I0916 10:33:58.552540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.665µs"
	
	
	==> kube-controller-manager [dd99b58642bf7eb44b7455752a1b25ad758e6d5c63ee32949852dcef8026edae] <==
	I0916 10:34:41.446292       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:34:41.449685       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:34:41.455194       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:41.866377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951654       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951690       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:02.709012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.757176ms"
	E0916 10:35:02.709065       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.709405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.008906ms"
	E0916 10:35:02.709504       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.642122ms"
	E0916 10:35:02.720256       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.567387ms"
	E0916 10:35:02.720286       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.803923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="82.487261ms"
	I0916 10:35:02.817637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="96.173054ms"
	I0916 10:35:02.898157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="79.487998ms"
	I0916 10:35:02.898365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="73.07µs"
	I0916 10:35:02.908590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="104.542603ms"
	I0916 10:35:02.908674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="37.49µs"
	I0916 10:35:02.908825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="342.16µs"
	I0916 10:35:06.827697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.271508ms"
	I0916 10:35:06.827804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="51.907µs"
	I0916 10:35:13.844219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.700699ms"
	I0916 10:35:13.844473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.711µs"
	
	
	==> kube-proxy [8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d] <==
	I0916 10:34:39.218200       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:34:39.331180       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:34:39.331273       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:39.352386       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:34:39.352459       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:39.354438       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:39.354816       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:39.354852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:39.355965       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:39.355967       1 config.go:199] "Starting service config controller"
	I0916 10:34:39.356016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:39.356018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:39.356050       1 config.go:328] "Starting node config controller"
	I0916 10:34:39.356062       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:39.456934       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:34:39.456969       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:39.456979       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b] <==
	I0916 10:33:50.617128       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:53.201354       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:53.201554       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:53.314988       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:53.315060       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:53.318944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:53.319862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:53.319904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.321510       1 config.go:199] "Starting service config controller"
	I0916 10:33:53.321547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:53.321583       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:53.321592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:53.322001       1 config.go:328] "Starting node config controller"
	I0916 10:33:53.322360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:53.421890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:53.421914       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:53.422563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a] <==
	I0916 10:33:51.925005       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:33:53.094343       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:33:53.094399       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:33:53.094414       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:33:53.094424       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:33:53.205695       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:33:53.205808       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.208746       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:33:53.208879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:33:53.208938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:33:53.208906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:33:53.309785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:19.550098       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:34:19.550186       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:34:19.550394       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [912dea9fa95088e76fc67e62800091be16d7f78ce4aebdd582e9645601d028f5] <==
	I0916 10:34:36.496922       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:34:37.813654       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:34:37.814327       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:34:37.814409       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:34:37.814446       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:34:37.907304       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:34:37.907329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:37.909440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:34:37.909504       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:34:37.909560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:34:37.909610       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:38.010226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827241    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-xtables-lock\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827385    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-cni-cfg\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827427    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a7e94614-567e-47ba-a51a-426f09198dba-tmp\") pod \"storage-provisioner\" (UID: \"a7e94614-567e-47ba-a51a-426f09198dba\") " pod="kube-system/storage-provisioner"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827500    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-lib-modules\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827526    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-xtables-lock\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827581    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-lib-modules\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999207    6025 scope.go:117] "RemoveContainer" containerID="500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999378    6025 scope.go:117] "RemoveContainer" containerID="ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999500    6025 scope.go:117] "RemoveContainer" containerID="e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999567    6025 scope.go:117] "RemoveContainer" containerID="a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b"
	Sep 16 10:34:40 functional-546931 kubelet[6025]: I0916 10:34:40.708631    6025 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" path="/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/volumes"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810256    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810297    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811575    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811622    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: E0916 10:35:02.803189    6025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.803256    6025 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900493    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900565    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8a97415-7eb6-4d52-99c2-916e38eb0960-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900597    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4skd\" (UniqueName: \"kubernetes.io/projected/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-kube-api-access-d4skd\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900646    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq9v\" (UniqueName: \"kubernetes.io/projected/e8a97415-7eb6-4d52-99c2-916e38eb0960-kube-api-access-nmq9v\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:03 functional-546931 kubelet[6025]: I0916 10:35:03.009620    6025 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.812961    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.813005    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:13 functional-546931 kubelet[6025]: I0916 10:35:13.835670    6025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp" podStartSLOduration=8.425084592 podStartE2EDuration="11.835645242s" podCreationTimestamp="2024-09-16 10:35:02 +0000 UTC" firstStartedPulling="2024-09-16 10:35:03.1414915 +0000 UTC m=+28.542739814" lastFinishedPulling="2024-09-16 10:35:06.552052157 +0000 UTC m=+31.953300464" observedRunningTime="2024-09-16 10:35:06.822000908 +0000 UTC m=+32.223249234" watchObservedRunningTime="2024-09-16 10:35:13.835645242 +0000 UTC m=+39.236893567"
	
	
	==> kubernetes-dashboard [4857b289b743e68c2d752b6b42d6dc46a1822cc11a3439e84dae59f3cd0fcafb] <==
	2024/09/16 10:35:13 Using namespace: kubernetes-dashboard
	2024/09/16 10:35:13 Using in-cluster config to connect to apiserver
	2024/09/16 10:35:13 Using secret token for csrf signing
	2024/09/16 10:35:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:35:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:35:13 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:35:13 Generating JWE encryption key
	2024/09/16 10:35:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:35:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:35:13 Initializing JWE encryption key from synchronized object
	2024/09/16 10:35:13 Creating in-cluster Sidecar client
	2024/09/16 10:35:13 Successful request to sidecar
	2024/09/16 10:35:13 Serving insecurely on HTTP port: 9090
	2024/09/16 10:35:13 Starting overwatch
	
	
	==> storage-provisioner [1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397] <==
	I0916 10:34:39.127159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:39.136475       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:39.136516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:56.587879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:56.587950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342 became leader
	I0916 10:34:56.588053       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	I0916 10:34:56.688953       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	
	
	==> storage-provisioner [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b] <==
	I0916 10:34:02.111528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:02.120479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:02.120525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:19.534445       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:19.534594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae!
	I0916 10:34:19.534583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae became leader
	

-- /stdout --
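A note on the kube-scheduler lines above: the "Unable to get configmap/extension-apiserver-authentication" and "User \"system:kube-scheduler\" cannot get resource \"configmaps\"" warnings are each followed by "Continuing without authentication configuration", so they are noise here rather than the cause of any failure. If silencing them were desired, a rolebinding along the lines the log itself suggests would do it; a sketch only, hedged because the scheduler in this log authenticates as the user system:kube-scheduler rather than a service account:

# Illustrative, not from this run: grant the scheduler read access to the
# extension-apiserver-authentication configmap (the binding name is arbitrary).
kubectl create rolebinding scheduler-authentication-reader \
  -n kube-system \
  --role=extension-apiserver-authentication-reader \
  --user=system:kube-scheduler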
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546931 -n functional-546931
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (455.972µs)
helpers_test.go:263: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/MySQL (2.45s)
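Nearly every kubectl invocation in this run fails with "fork/exec /usr/local/bin/kubectl: exec format error", meaning the kernel refused to execute the kubectl binary itself (typically a wrong-architecture or truncated/corrupt file), so these assertions fail before any request reaches the cluster. A minimal diagnostic sketch for the CI host, assuming shell access (the path is taken from the failure output):

# Does the binary's architecture match the host? (ubuntu-20-agent-9 is linux/amd64)
file /usr/local/bin/kubectl    # a healthy binary reports: ELF 64-bit LSB executable, x86-64, ...
uname -m                       # expect x86_64
# A truncated download, or an HTML error page saved in place of the binary, fails the same way:
head -c 4 /usr/local/bin/kubectl | od -An -tx1    # a valid Linux binary starts with the ELF magic 7f 45 4c 46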

TestFunctional/parallel/NodeLabels (2.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-546931 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-546931 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": fork/exec /usr/local/bin/kubectl: exec format error (491.669µs)
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-546931 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-546931
helpers_test.go:235: (dbg) docker inspect functional-546931:

-- stdout --
	[
	    {
	        "Id": "481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383",
	        "Created": "2024-09-16T10:33:07.830189623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35477,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:33:07.949246182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hostname",
	        "HostsPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/hosts",
	        "LogPath": "/var/lib/docker/containers/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383/481b09cdfdaee57b1d7ed7445eaabff947cee14e33e2c3d33dbddd3a98f82383-json.log",
	        "Name": "/functional-546931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-546931:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-546931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ff5db6b1b37df538e46041512f2ac9aa352e2e7fd16faab0989059bc815d40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-546931",
	                "Source": "/var/lib/docker/volumes/functional-546931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546931",
	                "name.minikube.sigs.k8s.io": "functional-546931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a63c1ddb1b935e3fe8e5ef70fdb0c600197ad5f66a82a23245d6065ac1a636ff",
	            "SandboxKey": "/var/run/docker/netns/a63c1ddb1b93",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c19058e5aabeca0bc30434433d26203e7a45051a16cbafeae207abc5b1915f6c",
	                    "EndpointID": "d06fb1106d7a54a1e55e6e03322a29be01414e698106136216a156a15ae725c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546931",
	                        "481b09cdfdae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
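When only a couple of fields from the inspect dump matter for triage, such as the container state and the host port fronting the API server on 8441/tcp, docker's format templates avoid the full JSON wall. A sketch against the container above, using standard docker CLI templating (the values shown match the dump):

docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' functional-546931
# running pid=35477
docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-546931
# 32781
docker port functional-546931 8441/tcp
# 127.0.0.1:32781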
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546931 -n functional-546931
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs -n 25: (1.448298372s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | -p functional-546931                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh echo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | hello                                                                    |                   |         |         |                     |                     |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdspecific-port2367125525/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh cat                                                | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | /etc/hostname                                                            |                   |         |         |                     |                     |
	| tunnel    | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel    | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| tunnel    | functional-546931 tunnel                                                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh -- ls                                              | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| license   |                                                                          | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh findmnt                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-546931                                                     | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | --kill=true                                                              |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | systemctl is-active docker                                               |                   |         |         |                     |                     |
	| ssh       | functional-546931 ssh sudo                                               | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|           | systemctl is-active containerd                                           |                   |         |         |                     |                     |
	| addons    | functional-546931 addons list                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| addons    | functional-546931 addons list                                            | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|           | -o json                                                                  |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:35:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:35:00.918258   48694 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:00.918452   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918475   48694 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:00.918487   48694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.918709   48694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:35:00.919256   48694 out.go:352] Setting JSON to false
	I0916 10:35:00.920662   48694 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1041,"bootTime":1726481860,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:00.920778   48694 start.go:139] virtualization: kvm guest
	I0916 10:35:00.924235   48694 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:00.931262   48694 notify.go:220] Checking for updates...
	I0916 10:35:00.931605   48694 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:00.933358   48694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:00.935102   48694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:35:00.936553   48694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:35:00.937907   48694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:00.939153   48694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:00.941266   48694 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:00.942118   48694 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:00.982940   48694 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:35:00.983034   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.072175   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.05984963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.072322   48694 docker.go:318] overlay module found
	I0916 10:35:01.074333   48694 out.go:177] * Using the docker driver based on existing profile
	I0916 10:35:01.075819   48694 start.go:297] selected driver: docker
	I0916 10:35:01.075840   48694 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.075969   48694 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:01.076061   48694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:01.145804   48694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:01.134479908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:01.146698   48694 cni.go:84] Creating CNI manager for ""
	I0916 10:35:01.146754   48694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:35:01.146819   48694 start.go:340] cluster config:
	{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:01.148893   48694 out.go:177] * dry-run validation complete!
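
	The trace above is the tail of a driver re-validation in dry-run mode: minikube re-checks the docker driver against the saved profile config without mutating anything. As a minimal sketch, the same validation can be replayed (assumes the functional-546931 profile from this run still exists):

	  minikube start -p functional-546931 --driver=docker --container-runtime=crio --dry-run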
	
	
	==> CRI-O <==
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.140381100Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-695b96c756-5ftj6 Namespace:kubernetes-dashboard ID:02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189 UID:9dae2eb0-2710-46a3-b5e1-17d5ee4b9367 NetNS:/var/run/netns/e397a580-a4b9-4dd4-a293-f15a5e318fbb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.140417549Z" level=info msg="Adding pod kubernetes-dashboard_kubernetes-dashboard-695b96c756-5ftj6 to CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.140976182Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=c8d344c5-38f0-4e48-9c6a-485d121fdc8b name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.141267607Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=c8d344c5-38f0-4e48-9c6a-485d121fdc8b name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.142391220Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ded3ee86-f493-4cfb-aec9-e6f34e50407c name=/runtime.v1.ImageService/PullImage
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.150279875Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.151791656Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-695b96c756-5ftj6 Namespace:kubernetes-dashboard ID:02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189 UID:9dae2eb0-2710-46a3-b5e1-17d5ee4b9367 NetNS:/var/run/netns/e397a580-a4b9-4dd4-a293-f15a5e318fbb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.151959311Z" level=info msg="Checking pod kubernetes-dashboard_kubernetes-dashboard-695b96c756-5ftj6 for CNI network kindnet (type=ptp)"
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.154117399Z" level=info msg="Ran pod sandbox 02f8caa1ee139f1c0ccf4acabfdd2188180ed00107ff0d36aa89824a6c5bb189 with infra container: kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6/POD" id=ad660a29-463e-4b5b-941d-6518b3b41834 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.155346200Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1dc22da1-f620-47a2-b510-3341b99e3dbf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:03 functional-546931 crio[5663]: time="2024-09-16 10:35:03.155630796Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1dc22da1-f620-47a2-b510-3341b99e3dbf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:04 functional-546931 crio[5663]: time="2024-09-16 10:35:04.199624344Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.550214425Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=ded3ee86-f493-4cfb-aec9-e6f34e50407c name=/runtime.v1.ImageService/PullImage
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.550997471Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=8b79152a-1a72-494d-ab68-34c6690e82ae name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.551761856Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c],Size_:43824855,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8b79152a-1a72-494d-ab68-34c6690e82ae name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.552231838Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=36f52340-c482-4672-a901-32512ef4d80e name=/runtime.v1.ImageService/PullImage
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.552560072Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ca91ce83-b530-4139-b7ce-6e4adb36355b name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.553449188Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.553527183Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c],Size_:43824855,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=ca91ce83-b530-4139-b7ce-6e4adb36355b name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.554326746Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/dashboard-metrics-scraper" id=98a47449-2f6d-4aa0-95a8-192eaf56a2ad name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.554448824Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.565915372Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e72f8f4f6ca79f0d9f6ed4a6ebc09d72da0e36a4da649d6518503399e365507e/merged/etc/group: no such file or directory"
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.599217946Z" level=info msg="Created container 716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/dashboard-metrics-scraper" id=98a47449-2f6d-4aa0-95a8-192eaf56a2ad name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.599856261Z" level=info msg="Starting container: 716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979" id=de200919-d589-4879-b309-49db1992d297 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:35:06 functional-546931 crio[5663]: time="2024-09-16 10:35:06.605828097Z" level=info msg="Started container" PID=8987 containerID=716706ee816f0966c556fda22405e0f448cb8d6f5ab40607ada67989df45d979 description=kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp/dashboard-metrics-scraper id=de200919-d589-4879-b309-49db1992d297 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03844bf992fc98df6d81bbcc15fb2182753b34df7aabffa7794374eb4e70f936
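
	Note how the pull resolves the manifest digest pinned in the pod spec (sha256:76049887...) to the stored repo digest (sha256:43227e82...), which is why both digests appear in the ImageStatus responses above. A sketch for inspecting this on the node (assumes the profile is still running and crictl is available in the node image, as it is in minikube's kicbase):

	  minikube ssh -p functional-546931 -- sudo crictl images --digests
	  minikube ssh -p functional-546931 -- sudo crictl inspecti docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a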
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED                  STATE               NAME                        ATTEMPT             POD ID              POD
	716706ee816f0       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   Less than a second ago   Running             dashboard-metrics-scraper   0                   03844bf992fc9       dashboard-metrics-scraper-c5db448b4-7c2lp
	b8b7b2145f381       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 27 seconds ago           Running             coredns                     2                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	79a9d7528eb3f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 27 seconds ago           Running             kindnet-cni                 2                   4aa3f5aefc537       kindnet-6dtx8
	8b4c53b5f60bc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 27 seconds ago           Running             kube-proxy                  2                   f14f9778290af       kube-proxy-kshs9
	1cc14bbfee0f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 27 seconds ago           Running             storage-provisioner         3                   2133c690032da       storage-provisioner
	a27a3ce3a5b44       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 31 seconds ago           Running             kube-apiserver              0                   af1925dee3fc2       kube-apiserver-functional-546931
	442cc07de2d20       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 31 seconds ago           Running             etcd                        2                   5b3fe285a2416       etcd-functional-546931
	912dea9fa9508       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 31 seconds ago           Running             kube-scheduler              2                   f41f93397a4f0       kube-scheduler-functional-546931
	dd99b58642bf7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 31 seconds ago           Running             kube-controller-manager     2                   878410a4a3694       kube-controller-manager-functional-546931
	a51e8bf1740c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 About a minute ago       Exited              storage-provisioner         2                   2133c690032da       storage-provisioner
	03c9ff61deb56       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 About a minute ago       Exited              kube-scheduler              1                   f41f93397a4f0       kube-scheduler-functional-546931
	500f67fe93de9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 About a minute ago       Exited              coredns                     1                   a8423288f91be       coredns-7c65d6cfc9-wjzzx
	1923f1dc4c46c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 About a minute ago       Exited              etcd                        1                   5b3fe285a2416       etcd-functional-546931
	8578098c4830c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 About a minute ago       Exited              kube-controller-manager     1                   878410a4a3694       kube-controller-manager-functional-546931
	e2626d8943ee8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 About a minute ago       Exited              kindnet-cni                 1                   4aa3f5aefc537       kindnet-6dtx8
	ce7cf09b88b18       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 About a minute ago       Exited              kube-proxy                  1                   f14f9778290af       kube-proxy-kshs9
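
	The table above is CRI-O's container listing; the Exited rows are the pre-restart containers (earlier attempt numbers) that the freshly started attempts replaced. A sketch to regenerate it against the live node:

	  minikube ssh -p functional-546931 -- sudo crictl ps -a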
	
	
	==> coredns [500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32777 - 2477 "HINFO IN 3420670606416057959.5314460485211468677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080961734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b8b7b2145f381e934f147b6df3d6f65a4d2722ea152dbc01af28a68128e997eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48590 - 30001 "HINFO IN 6895879156775148846.7943209663817132014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009362696s
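
	The first CoreDNS instance logged "connection refused" against 10.96.0.1:443 while the apiserver was down during the restart, then received SIGTERM; the replacement instance above comes up cleanly and answers its HINFO self-check. A sketch for checking both instances (assumes the standard k8s-app=kube-dns label):

	  kubectl --context functional-546931 -n kube-system get pods -l k8s-app=kube-dns
	  kubectl --context functional-546931 -n kube-system logs -l k8s-app=kube-dns --previous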
	
	
	==> describe nodes <==
	Name:               functional-546931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-546931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_33_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:34:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:34:37 +0000   Mon, 16 Sep 2024 10:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546931
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f68b7ee331b4ad9bbce7c85ad5c1bae
	  System UUID:                b53a3b64-9d61-46d9-a694-0cd93fe258a6
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wjzzx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     100s
	  kube-system                 etcd-functional-546931                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-6dtx8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      100s
	  kube-system                 kube-apiserver-functional-546931             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-functional-546931    200m (2%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-kshs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-functional-546931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-7c2lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-5ftj6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 99s                  kube-proxy       
	  Normal   Starting                 27s                  kube-proxy       
	  Normal   Starting                 73s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s (x7 over 111s)  kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     105s                 kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 105s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  105s                 kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s                 kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 105s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           102s                 node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   NodeReady                89s                  kubelet          Node functional-546931 status is now: NodeReady
	  Normal   RegisteredNode           71s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
	  Normal   Starting                 33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  32s (x8 over 33s)    kubelet          Node functional-546931 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s (x8 over 33s)    kubelet          Node functional-546931 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s (x7 over 33s)    kubelet          Node functional-546931 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26s                  node-controller  Node functional-546931 event: Registered Node functional-546931 in Controller
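
	The repeated Starting/NodeHas* event groups above correspond to the three kubelet starts in this test (initial boot plus two restarts). This node view can be reproduced directly while the profile is up:

	  kubectl --context functional-546931 describe node functional-546931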
	
	
	==> dmesg <==
	[  +0.002592]  #5
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 10:35] FS-Cache: Duplicate cookie detected
	[  +0.005031] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000007485c404{9P.session} n=000000002b39a795
	[  +0.007541] FS-Cache: O-key=[10] '34323935313533303732'
	[  +0.005370] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.006617] FS-Cache: N-cookie d=000000007485c404{9P.session} n=00000000364f9863
	[  +0.008939] FS-Cache: N-key=[10] '34323935313533303732'
	
	
	==> etcd [1923f1dc4c46cac4915a131489306d511a8476f05bccb03de7b11d4e30aa7c54] <==
	{"level":"info","ts":"2024-09-16T10:33:51.496123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:51.496167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.496211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:33:51.497277Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:51.497313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:51.497431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.497494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:51.498556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.498618Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:51.499441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:33:51.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.549372Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:19.549504Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:19.549651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.549778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:19.567753Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:19.567807Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:34:19.570718Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570822Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:19.570856Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-546931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [442cc07de2d20f1858aca970b1589445d9119ae98c169613f5a7a2162fb91a1f] <==
	{"level":"info","ts":"2024-09-16T10:34:35.628722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:35.628909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.629009Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.629102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:35.630742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:35.630981Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:35.631046Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:35.631386Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:35.631405Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:34:36.820902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:36.820996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.821027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-16T10:34:36.824459Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-546931 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:36.824466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:36.824703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.824748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:36.826097Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.826340Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:36.827299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:34:36.827338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:35:07 up 17 min,  0 users,  load average: 1.53, 0.67, 0.39
	Linux functional-546931 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [79a9d7528eb3fca6f10a6224728aea01d385814ccafadffcd43797a282fe7e50] <==
	I0916 10:34:39.296704       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:34:39.296992       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:34:39.297141       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:34:39.297157       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:34:39.297188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:34:39.618532       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:34:39.618549       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:34:39.618556       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:34:39.918628       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:34:39.918678       1 metrics.go:61] Registering metrics
	I0916 10:34:39.918760       1 controller.go:374] Syncing nftables rules
	I0916 10:34:49.618556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:49.618661       1 main.go:299] handling current node
	I0916 10:34:59.625424       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:59.625493       1 main.go:299] handling current node
	
	
	==> kindnet [e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e] <==
	I0916 10:33:50.598229       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:50.599351       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:33:50.600449       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:50.600526       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:50.600569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:51.126371       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:33:51.126391       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:33:51.126399       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:33:53.293595       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:33:53.293784       1 metrics.go:61] Registering metrics
	I0916 10:33:53.293935       1 controller.go:374] Syncing nftables rules
	I0916 10:34:01.126660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:01.126723       1 main.go:299] handling current node
	I0916 10:34:11.131420       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:11.131464       1 main.go:299] handling current node
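
	Both kindnet instances follow the same pattern: sync the network-policy informer caches, then reconcile the (single) node every ten seconds. A sketch for tailing the live daemonset (assumes minikube's kindnet pods carry the usual app=kindnet label):

	  kubectl --context functional-546931 -n kube-system logs -l app=kindnet --tail=20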
	
	
	==> kube-apiserver [a27a3ce3a5b44b4d7dfa94c04f9b5d3a9df2035f73f12a33181af17c65130c46] <==
	I0916 10:34:37.906296       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:34:37.906350       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:34:37.906357       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:34:37.906380       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:34:37.906400       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:34:37.906408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:34:37.906414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:34:37.908814       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:34:37.908932       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:34:37.908950       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:34:37.912871       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:34:37.916515       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:34:37.923754       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:34:38.812624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:34:39.678850       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:34:39.868256       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:34:39.879574       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:34:39.941085       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:34:39.947167       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:34:56.583902       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:02.580292       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:35:02.631388       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:35:02.925711       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.155.226"}
	I0916 10:35:02.995387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:03.006863       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.172.127"}
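
	The last few apiserver lines record the dashboard deployment: two Services were created and assigned ClusterIPs (10.110.155.226 and 10.102.172.127). A sketch to confirm them:

	  kubectl --context functional-546931 -n kubernetes-dashboard get svc -o wide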
	
	
	==> kube-controller-manager [8578098c4830c5f7f5d59fd0bf1ae71061a40b0292d2304987ae275f1228db0b] <==
	I0916 10:33:56.401158       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:33:56.401164       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:33:56.401172       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:33:56.401277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-546931"
	I0916 10:33:56.403349       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:33:56.403423       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:33:56.403506       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546931"
	I0916 10:33:56.403561       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:33:56.513024       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:33:56.541883       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:33:56.542896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:33:56.544059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:33:56.544137       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:33:56.544141       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:33:56.548517       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.583700       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:33:56.600343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:56.606853       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:33:56.702066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.654324ms"
	I0916 10:33:56.702225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.375µs"
	I0916 10:33:57.010557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:57.042413       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:58.552447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.544591ms"
	I0916 10:33:58.552540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.665µs"
	
	
	==> kube-controller-manager [dd99b58642bf7eb44b7455752a1b25ad758e6d5c63ee32949852dcef8026edae] <==
	I0916 10:34:41.425963       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 10:34:41.426047       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:34:41.446292       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:34:41.449685       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:34:41.455194       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:41.866377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951654       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:41.951690       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:02.709012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.757176ms"
	E0916 10:35:02.709065       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.709405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.008906ms"
	E0916 10:35:02.709504       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.642122ms"
	E0916 10:35:02.720256       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.720217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.567387ms"
	E0916 10:35:02.720286       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:35:02.803923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="82.487261ms"
	I0916 10:35:02.817637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="96.173054ms"
	I0916 10:35:02.898157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="79.487998ms"
	I0916 10:35:02.898365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="73.07µs"
	I0916 10:35:02.908590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="104.542603ms"
	I0916 10:35:02.908674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="37.49µs"
	I0916 10:35:02.908825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="342.16µs"
	I0916 10:35:06.827697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.271508ms"
	I0916 10:35:06.827804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="51.907µs"
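
	The "serviceaccount \"kubernetes-dashboard\" not found" errors above are a create-ordering race, not a failure: the dashboard ReplicaSets were applied before their ServiceAccount existed, so the controller retried until it appeared, after which the syncs from 10:35:02.803 onward succeed. A sketch to verify the objects settled:

	  kubectl --context functional-546931 -n kubernetes-dashboard get serviceaccounts,replicasets,pods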
	
	
	==> kube-proxy [8b4c53b5f60bc708297acdf22e1b2ad82c81b2e016c22584bc1f44385414492d] <==
	I0916 10:34:39.218200       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:34:39.331180       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:34:39.331273       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:39.352386       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:34:39.352459       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:39.354438       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:39.354816       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:39.354852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:39.355965       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:39.355967       1 config.go:199] "Starting service config controller"
	I0916 10:34:39.356016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:39.356018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:39.356050       1 config.go:328] "Starting node config controller"
	I0916 10:34:39.356062       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:39.456934       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:34:39.456969       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:39.456979       1 shared_informer.go:320] Caches are synced for endpoint slice config
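Note: the "nodePortAddresses is unset" warning above (repeated by the second kube-proxy instance below) is informational and unrelated to the failures in this report; with the field unset, NodePort services simply accept connections on all local IPs. If desired, it can be addressed the way the message itself suggests, by restricting NodePort listeners to the primary interface (a sketch of the flag quoted from the log message; minikube leaves it unset by default, which is why the warning appears):

    kube-proxy --nodeport-addresses primary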
	
	
	==> kube-proxy [ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b] <==
	I0916 10:33:50.617128       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:33:53.201354       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:33:53.201554       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:53.314988       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:33:53.315060       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:53.318944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:53.319862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:53.319904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.321510       1 config.go:199] "Starting service config controller"
	I0916 10:33:53.321547       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:53.321583       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:53.321592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:53.322001       1 config.go:328] "Starting node config controller"
	I0916 10:33:53.322360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:53.421890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:53.421914       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:53.422563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [03c9ff61deb562bff65aed9fa58a6b016e06aca0192ae59536c22b467bb9de8a] <==
	I0916 10:33:51.925005       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:33:53.094343       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:33:53.094399       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:33:53.094414       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:33:53.094424       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:33:53.205695       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:33:53.205808       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:53.208746       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:33:53.208879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:33:53.208938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:33:53.208906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:33:53.309785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:19.550098       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:34:19.550186       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:34:19.550394       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [912dea9fa95088e76fc67e62800091be16d7f78ce4aebdd582e9645601d028f5] <==
	I0916 10:34:36.496922       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:34:37.813654       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:34:37.814327       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:34:37.814409       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:34:37.814446       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:34:37.907304       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:34:37.907329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:37.909440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:34:37.909504       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:34:37.909560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:34:37.909610       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:38.010226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.813147    6025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-546931" podStartSLOduration=0.813122953 podStartE2EDuration="813.122953ms" podCreationTimestamp="2024-09-16 10:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:34:38.813088977 +0000 UTC m=+4.214337302" watchObservedRunningTime="2024-09-16 10:34:38.813122953 +0000 UTC m=+4.214371278"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827241    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-xtables-lock\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827385    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-cni-cfg\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827427    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a7e94614-567e-47ba-a51a-426f09198dba-tmp\") pod \"storage-provisioner\" (UID: \"a7e94614-567e-47ba-a51a-426f09198dba\") " pod="kube-system/storage-provisioner"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827500    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44bb424a-c279-467b-9256-64be125798f9-lib-modules\") pod \"kindnet-6dtx8\" (UID: \"44bb424a-c279-467b-9256-64be125798f9\") " pod="kube-system/kindnet-6dtx8"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827526    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-xtables-lock\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.827581    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2a1ef0a-22f5-4b04-a7fe-30e019b2687b-lib-modules\") pod \"kube-proxy-kshs9\" (UID: \"c2a1ef0a-22f5-4b04-a7fe-30e019b2687b\") " pod="kube-system/kube-proxy-kshs9"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999207    6025 scope.go:117] "RemoveContainer" containerID="500f67fe93de9a8cdf9a253e0e8e1679a5bd41851d481bc88011d3a13340b7af"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999378    6025 scope.go:117] "RemoveContainer" containerID="ce7cf09b88b18ec9a3aedc13c1a7748a56c52551fb578e351ab71ab67c232b8b"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999500    6025 scope.go:117] "RemoveContainer" containerID="e2626d8943ee8beaea49f2b23d15e1067da25a18b4a44debc92d42920d43e65e"
	Sep 16 10:34:38 functional-546931 kubelet[6025]: I0916 10:34:38.999567    6025 scope.go:117] "RemoveContainer" containerID="a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b"
	Sep 16 10:34:40 functional-546931 kubelet[6025]: I0916 10:34:40.708631    6025 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" path="/var/lib/kubelet/pods/eb02afa85fe4b42d87b2f90fa03a9ee4/volumes"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810256    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:44 functional-546931 kubelet[6025]: E0916 10:34:44.810297    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482884810036474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811575    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:34:54 functional-546931 kubelet[6025]: E0916 10:34:54.811622    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482894811390459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: E0916 10:35:02.803189    6025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.803256    6025 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb02afa85fe4b42d87b2f90fa03a9ee4" containerName="kube-apiserver"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900493    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900565    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8a97415-7eb6-4d52-99c2-916e38eb0960-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900597    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4skd\" (UniqueName: \"kubernetes.io/projected/9dae2eb0-2710-46a3-b5e1-17d5ee4b9367-kube-api-access-d4skd\") pod \"kubernetes-dashboard-695b96c756-5ftj6\" (UID: \"9dae2eb0-2710-46a3-b5e1-17d5ee4b9367\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5ftj6"
	Sep 16 10:35:02 functional-546931 kubelet[6025]: I0916 10:35:02.900646    6025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq9v\" (UniqueName: \"kubernetes.io/projected/e8a97415-7eb6-4d52-99c2-916e38eb0960-kube-api-access-nmq9v\") pod \"dashboard-metrics-scraper-c5db448b4-7c2lp\" (UID: \"e8a97415-7eb6-4d52-99c2-916e38eb0960\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-7c2lp"
	Sep 16 10:35:03 functional-546931 kubelet[6025]: I0916 10:35:03.009620    6025 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.812961    6025 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:04 functional-546931 kubelet[6025]: E0916 10:35:04.813005    6025 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482904812714013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1cc14bbfee0f559cf50961c0d3e5b8ede8af354adeaf238bd11e4ba944440397] <==
	I0916 10:34:39.127159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:39.136475       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:39.136516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:56.587879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:56.587950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342 became leader
	I0916 10:34:56.588053       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	I0916 10:34:56.688953       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-546931_bccfd185-48a8-4914-9eb9-92f6b7c18342!
	
	
	==> storage-provisioner [a51e8bf1740c3c343f99325c544684427ec253b50dc26046f18aa8d25aaa7a8b] <==
	I0916 10:34:02.111528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:02.120479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:02.120525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:34:19.534445       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:34:19.534594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae!
	I0916 10:34:19.534583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc246147-2d82-4572-9c07-a6821bde6d8c", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546931_da727940-4201-4a48-9cb2-fb459cdd04ae became leader
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546931 -n functional-546931
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (475.441µs)
helpers_test.go:263: kubectl --context functional-546931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/NodeLabels (2.12s)
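Note: the recurring "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary at all. The file is not a valid executable for this host (typically a binary built for a different architecture, or a truncated download), so every kubectl-based step in this run fails before any request reaches the cluster. A minimal diagnostic sketch for the agent (same path as in this run; these commands are illustrative, not part of the test suite):

    uname -m                                   # host architecture (x86_64 on this agent)
    file /usr/local/bin/kubectl                # should report an ELF 64-bit x86-64 executable
    head -c 4 /usr/local/bin/kubectl | od -c   # a valid ELF binary starts with \177 E L F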

TestFunctional/parallel/ServiceCmd/DeployApp (0s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-546931 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1439: (dbg) Non-zero exit: kubectl --context functional-546931 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: fork/exec /usr/local/bin/kubectl: exec format error (450.346µs)
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-546931 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": fork/exec /usr/local/bin/kubectl: exec format error.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.00s)
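Because this create step never ran (the same kubectl exec format error), no hello-node Deployment or Service exists in the cluster, and the ServiceCmd subtests that follow fail as a cascade rather than independently. For reference, what the step would have created with a working kubectl (the expose command is an illustrative sketch of the usual follow-up, not quoted from this log):

    kubectl --context functional-546931 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-546931 expose deployment hello-node --type=NodePort --port=8080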

TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 service list
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"|-------------|------------|--------------|-----|\n|  NAMESPACE  |    NAME    | TARGET PORT  | URL |\n|-------------|------------|--------------|-----|\n| default     | kubernetes | No node port |     |\n| kube-system | kube-dns   | No node port |     |\n|-------------|------------|--------------|-----|\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/MountCmd/any-port (2.59s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdany-port811277425/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726482898825665135" to /tmp/TestFunctionalparallelMountCmdany-port811277425/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726482898825665135" to /tmp/TestFunctionalparallelMountCmdany-port811277425/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726482898825665135" to /tmp/TestFunctionalparallelMountCmdany-port811277425/001/test-1726482898825665135
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.841819ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 10:34 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 10:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 10:34 test-1726482898825665135
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh cat /mount-9p/test-1726482898825665135
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-546931 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-546931 replace --force -f testdata/busybox-mount-test.yaml: fork/exec /usr/local/bin/kubectl: exec format error (462.172µs)
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-546931 replace --force -f testdata/busybox-mount-test.yaml" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (377.505507ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=999,access=any,msize=262144,trans=tcp,noextend,port=40725)
	total 2
	-rw-r--r-- 1 docker docker 24 Sep 16 10:34 created-by-test
	-rw-r--r-- 1 docker docker 24 Sep 16 10:34 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Sep 16 10:34 test-1726482898825665135
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-546931 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdany-port811277425/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdany-port811277425/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port811277425/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40725
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port811277425/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdany-port811277425/001:/mount-9p --alsologtostderr -v=1] stderr:
I0916 10:34:58.894900   46698 out.go:345] Setting OutFile to fd 1 ...
I0916 10:34:58.895089   46698 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:34:58.895110   46698 out.go:358] Setting ErrFile to fd 2...
I0916 10:34:58.895119   46698 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:34:58.895337   46698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
I0916 10:34:58.895593   46698 mustload.go:65] Loading cluster: functional-546931
I0916 10:34:58.895969   46698 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:34:58.896427   46698 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
I0916 10:34:58.917582   46698 host.go:66] Checking if "functional-546931" exists ...
I0916 10:34:58.917910   46698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0916 10:34:59.010384   46698 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:34:58.996981208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0916 10:34:59.010558   46698 cli_runner.go:164] Run: docker network inspect functional-546931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0916 10:34:59.035351   46698 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port811277425/001 into VM as /mount-9p ...
I0916 10:34:59.036804   46698 out.go:177]   - Mount type:   9p
I0916 10:34:59.038260   46698 out.go:177]   - User ID:      docker
I0916 10:34:59.040209   46698 out.go:177]   - Group ID:     docker
I0916 10:34:59.041609   46698 out.go:177]   - Version:      9p2000.L
I0916 10:34:59.042981   46698 out.go:177]   - Message Size: 262144
I0916 10:34:59.045684   46698 out.go:177]   - Options:      map[]
I0916 10:34:59.047262   46698 out.go:177]   - Bind Address: 192.168.49.1:40725
I0916 10:34:59.048598   46698 out.go:177] * Userspace file server: 
I0916 10:34:59.049721   46698 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0916 10:34:59.049804   46698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
I0916 10:34:59.070319   46698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
I0916 10:34:59.175542   46698 mount.go:180] unmount for /mount-9p ran successfully
I0916 10:34:59.175587   46698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0916 10:34:59.190263   46698 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40725,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I0916 10:34:59.233660   46698 main.go:125] stdlog: ufs.go:141 connected
I0916 10:34:59.233858   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tversion tag 65535 msize 262144 version '9P2000.L'
I0916 10:34:59.233911   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rversion tag 65535 msize 262144 version '9P2000'
I0916 10:34:59.234259   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0916 10:34:59.234321   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rattach tag 0 aqid (20fa071 fa665f85 'd')
I0916 10:34:59.234600   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 0
I0916 10:34:59.234779   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa071 fa665f85 'd') m d775 at 0 mt 1726482898 l 4096 t 0 d 0 ext )
I0916 10:34:59.239508   46698 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/.mount-process: {Name:mk9344465bd8d59555ed054264efa1c3f53a7a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0916 10:34:59.239708   46698 mount.go:105] mount successful: ""
I0916 10:34:59.242695   46698 out.go:177] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port811277425/001 to /mount-9p
I0916 10:34:59.244274   46698 out.go:201] 
I0916 10:34:59.245557   46698 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0916 10:35:00.205765   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 0
I0916 10:35:00.205910   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa071 fa665f85 'd') m d775 at 0 mt 1726482898 l 4096 t 0 d 0 ext )
I0916 10:35:00.206248   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 1 
I0916 10:35:00.206298   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 
I0916 10:35:00.206418   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Topen tag 0 fid 1 mode 0
I0916 10:35:00.206502   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Ropen tag 0 qid (20fa071 fa665f85 'd') iounit 0
I0916 10:35:00.206621   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 0
I0916 10:35:00.206728   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa071 fa665f85 'd') m d775 at 0 mt 1726482898 l 4096 t 0 d 0 ext )
I0916 10:35:00.206898   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 1 offset 0 count 262120
I0916 10:35:00.207079   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 258
I0916 10:35:00.207185   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 1 offset 258 count 261862
I0916 10:35:00.207223   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 0
I0916 10:35:00.207401   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 1 offset 258 count 262120
I0916 10:35:00.207427   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 0
I0916 10:35:00.207561   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0916 10:35:00.207600   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 (20fa073 fa665f85 '') 
I0916 10:35:00.207738   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.207834   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa073 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.208818   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.208955   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa073 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.209132   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 2
I0916 10:35:00.209165   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.209358   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 2 0:'test-1726482898825665135' 
I0916 10:35:00.209409   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 (20fa074 fa665f85 '') 
I0916 10:35:00.209686   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.209775   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('test-1726482898825665135' 'jenkins' 'balintp' '' q (20fa074 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.211458   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.211600   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('test-1726482898825665135' 'jenkins' 'balintp' '' q (20fa074 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.211818   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 2
I0916 10:35:00.211861   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.212061   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0916 10:35:00.212113   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 (20fa072 fa665f85 '') 
I0916 10:35:00.212255   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.212359   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa072 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.212630   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.212758   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa072 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.213047   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 2
I0916 10:35:00.213097   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.213324   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 1 offset 258 count 262120
I0916 10:35:00.213390   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 0
I0916 10:35:00.213695   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 1
I0916 10:35:00.213836   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.542873   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 1 0:'test-1726482898825665135' 
I0916 10:35:00.542953   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 (20fa074 fa665f85 '') 
I0916 10:35:00.543148   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 1
I0916 10:35:00.543270   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('test-1726482898825665135' 'jenkins' 'balintp' '' q (20fa074 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.543449   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 1 newfid 2 
I0916 10:35:00.543501   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 
I0916 10:35:00.543654   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Topen tag 0 fid 2 mode 0
I0916 10:35:00.543704   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Ropen tag 0 qid (20fa074 fa665f85 '') iounit 0
I0916 10:35:00.543825   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 1
I0916 10:35:00.543925   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('test-1726482898825665135' 'jenkins' 'balintp' '' q (20fa074 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.544170   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 2 offset 0 count 262120
I0916 10:35:00.544264   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 24
I0916 10:35:00.544523   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 2 offset 24 count 262120
I0916 10:35:00.544584   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 0
I0916 10:35:00.544763   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 2 offset 24 count 262120
I0916 10:35:00.544802   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 0
I0916 10:35:00.544948   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 2
I0916 10:35:00.545041   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.545154   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 1
I0916 10:35:00.545189   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.920705   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 0
I0916 10:35:00.920876   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa071 fa665f85 'd') m d775 at 0 mt 1726482898 l 4096 t 0 d 0 ext )
I0916 10:35:00.921292   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 1 
I0916 10:35:00.921370   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 
I0916 10:35:00.921537   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Topen tag 0 fid 1 mode 0
I0916 10:35:00.921609   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Ropen tag 0 qid (20fa071 fa665f85 'd') iounit 0
I0916 10:35:00.921772   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 0
I0916 10:35:00.921886   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa071 fa665f85 'd') m d775 at 0 mt 1726482898 l 4096 t 0 d 0 ext )
I0916 10:35:00.922145   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 1 offset 0 count 262120
I0916 10:35:00.922324   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 258
I0916 10:35:00.922459   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 1 offset 258 count 261862
I0916 10:35:00.922498   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 0
I0916 10:35:00.922618   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 1 offset 258 count 262120
I0916 10:35:00.922656   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 0
I0916 10:35:00.922784   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0916 10:35:00.922825   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 (20fa073 fa665f85 '') 
I0916 10:35:00.922927   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.923030   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa073 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.923162   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.923259   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa073 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.923416   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 2
I0916 10:35:00.923447   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.923579   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 2 0:'test-1726482898825665135' 
I0916 10:35:00.923628   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 (20fa074 fa665f85 '') 
I0916 10:35:00.923787   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.923900   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('test-1726482898825665135' 'jenkins' 'balintp' '' q (20fa074 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.924041   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.924139   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('test-1726482898825665135' 'jenkins' 'balintp' '' q (20fa074 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.924298   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 2
I0916 10:35:00.924338   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.924470   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0916 10:35:00.924517   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rwalk tag 0 (20fa072 fa665f85 '') 
I0916 10:35:00.924649   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.924741   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa072 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.924873   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tstat tag 0 fid 2
I0916 10:35:00.924985   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa072 fa665f85 '') m 644 at 0 mt 1726482898 l 24 t 0 d 0 ext )
I0916 10:35:00.925115   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 2
I0916 10:35:00.925147   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.925273   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tread tag 0 fid 1 offset 258 count 262120
I0916 10:35:00.925308   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rread tag 0 count 0
I0916 10:35:00.927620   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 1
I0916 10:35:00.927663   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:00.930251   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0916 10:35:00.930315   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rerror tag 0 ename 'file not found' ecode 0
I0916 10:35:01.298188   46698 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52106 Tclunk tag 0 fid 0
I0916 10:35:01.298229   46698 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52106 Rclunk tag 0
I0916 10:35:01.298605   46698 main.go:125] stdlog: ufs.go:147 disconnected
I0916 10:35:01.316818   46698 out.go:177] * Unmounting /mount-9p ...
I0916 10:35:01.318445   46698 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0916 10:35:01.325549   46698 mount.go:180] unmount for /mount-9p ran successfully
I0916 10:35:01.325664   46698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/.mount-process: {Name:mk9344465bd8d59555ed054264efa1c3f53a7a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0916 10:35:01.327466   46698 out.go:201] 
W0916 10:35:01.328876   46698 out.go:270] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0916 10:35:01.330191   46698 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.59s)
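The 9p mount itself worked: findmnt succeeded on the retry, and all three files written by the test were visible in /mount-9p. Only the busybox-mount-test pod step failed, again from the kubectl exec format error, which is why /mount-9p/pod-dates was never created; the final MK_INTERRUPTED exit is just the harness tearing the mount down. A sketch for reproducing the mount check by hand, using the same commands and paths as this run:

    out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdany-port811277425/001:/mount-9p &
    out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-546931 ssh "ls -la /mount-9p"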

TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 service list -o json
functional_test.go:1494: Took "408.560652ms" to run "out/minikube-linux-amd64 -p functional-546931 service list -o json"
functional_test.go:1498: expected the json of 'service list' to include "hello-node" but got *"[{\"Namespace\":\"default\",\"Name\":\"kubernetes\",\"URLs\":[],\"PortNames\":[\"No node port\"]},{\"Namespace\":\"kube-system\",\"Name\":\"kube-dns\",\"URLs\":[],\"PortNames\":[\"No node port\"]}]"*. args: "out/minikube-linux-amd64 -p functional-546931 service list -o json"
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 service --namespace=default --https --url hello-node: exit status 115 (345.708409ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1511: failed to get service url. args "out/minikube-linux-amd64 -p functional-546931 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 service hello-node --url --format={{.IP}}: exit status 115 (338.523017ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-546931 service hello-node --url --format={{.IP}}": exit status 115
functional_test.go:1548: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 service hello-node --url: exit status 115 (390.257677ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1561: failed to get service url. args: "out/minikube-linux-amd64 -p functional-546931 service hello-node --url": exit status 115
functional_test.go:1565: found endpoint for hello-node: 
functional_test.go:1573: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-546931 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-546931 apply -f testdata/testsvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (441.468µs)
functional_test_tunnel_test.go:214: kubectl --context functional-546931 apply -f testdata/testsvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (114.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-546931 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-546931 get svc nginx-svc: fork/exec /usr/local/bin/kubectl: exec format error (518.036µs)
functional_test_tunnel_test.go:292: kubectl --context functional-546931 get svc nginx-svc failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (114.55s)
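AccessDirect inherits the WaitService/Setup failure above: testdata/testsvc.yaml was never applied, so no nginx-svc exists and the tunnel never assigns an ingress IP, leaving the test polling the empty URL "http:". With a working kubectl, the address the test waits for would come from the service's load-balancer status, for example (an illustrative jsonpath query, not the test's own code):

    kubectl --context functional-546931 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'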

TestMultiControlPlane/serial/NodeLabels (2.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-107957 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-107957 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": fork/exec /usr/local/bin/kubectl: exec format error (488.993µs)
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-107957 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": fork/exec /usr/local/bin/kubectl: exec format error
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-107957 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
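The `unexpected end of JSON input` message follows directly from the kubectl failure above it: the command produced no output, and decoding an empty byte slice as JSON fails before any label comparison can happen. An illustrative sketch:

	// Illustrative sketch: decoding the empty output of the failed kubectl
	// call is what produces "unexpected end of JSON input".
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels) // kubectl produced no output
		fmt.Println(err)                           // unexpected end of JSON input
	}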
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-107957
helpers_test.go:235: (dbg) docker inspect ha-107957:

-- stdout --
	[
	    {
	        "Id": "8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd",
	        "Created": "2024-09-16T10:37:05.006225665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 58964,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:37:05.118823416Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/hosts",
	        "LogPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd-json.log",
	        "Name": "/ha-107957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-107957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-107957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-107957",
	                "Source": "/var/lib/docker/volumes/ha-107957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-107957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-107957",
	                "name.minikube.sigs.k8s.io": "ha-107957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1596d8f3a177074ac09c8b8ac92b313e5c035ff2701330f9d1b9b910d34ca9b",
	            "SandboxKey": "/var/run/docker/netns/f1596d8f3a17",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-107957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1162a04f8fb0eca4f56c515332b1b6b72501106e380521da303a5999505b78f5",
	                    "EndpointID": "6fab7b78e88e07ed9e169eb5c488f69225a0919e60c622ad643d4f3c5da0293c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-107957",
	                        "8934c54a2cf0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
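One detail worth noting in the inspect output above: the PortBindings request 127.0.0.1 with an empty HostPort, and Docker's ephemeral assignments (32783-32787) appear under NetworkSettings.Ports; the `docker container inspect -f` template later in these logs reads them back. A hedged sketch of the same lookup (assumes docker on PATH and the ha-107957 profile from this run):

	// Hedged sketch, not test-suite code: recover the ephemeral host port
	// Docker assigned for the container's SSH port, mirroring the inspect
	// template used later in these logs.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-107957").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		var info []struct {
			NetworkSettings struct {
				Ports map[string][]struct{ HostIp, HostPort string }
			}
		}
		if err := json.Unmarshal(out, &info); err != nil || len(info) == 0 {
			fmt.Println("could not decode inspect output:", err)
			return
		}
		if b := info[0].NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
			fmt.Println("ssh mapped to", b[0].HostIp+":"+b[0].HostPort) // 127.0.0.1:32783 in this run
		}
	}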
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-107957 -n ha-107957
helpers_test.go:244: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-107957 logs -n 25: (1.229057202s)
helpers_test.go:252: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-546931                    | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | image ls --format table              |                   |         |         |                     |                     |
	|         | --alsologtostderr                    |                   |         |         |                     |                     |
	| image   | functional-546931 image ls           | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| delete  | -p functional-546931                 | functional-546931 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| start   | -p ha-107957 --wait=true             | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:39 UTC |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=docker                      |                   |         |         |                     |                     |
	|         | --container-runtime=crio             |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- apply -f             | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- rollout status       | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- get pods -o          | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- get pods -o          | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-4rfjs --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-m2jh6 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-plmdj --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-4rfjs --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-m2jh6 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-plmdj --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-4rfjs -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-m2jh6 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-plmdj -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- get pods -o          | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-4rfjs              |                   |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-4rfjs -- sh        |                   |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1            |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-m2jh6              |                   |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-m2jh6 -- sh        |                   |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1            |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-plmdj              |                   |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl | -p ha-107957 -- exec                 | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|         | busybox-7dff88458-plmdj -- sh        |                   |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1            |                   |         |         |                     |                     |
	| node    | add -p ha-107957 -v=7                | ha-107957         | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:40 UTC |
	|         | --alsologtostderr                    |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:36:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:36:59.603398   58299 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:59.603689   58299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:59.603701   58299 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:59.603706   58299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:59.603926   58299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:36:59.604506   58299 out.go:352] Setting JSON to false
	I0916 10:36:59.605423   58299 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1160,"bootTime":1726481860,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:36:59.605545   58299 start.go:139] virtualization: kvm guest
	I0916 10:36:59.607783   58299 out.go:177] * [ha-107957] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:36:59.609154   58299 notify.go:220] Checking for updates...
	I0916 10:36:59.609171   58299 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:36:59.610814   58299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:36:59.612398   58299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:36:59.613838   58299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:36:59.615490   58299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:36:59.617049   58299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:36:59.618738   58299 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:36:59.642219   58299 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:36:59.642367   58299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:36:59.695784   58299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:36:59.683210757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:36:59.695892   58299 docker.go:318] overlay module found
	I0916 10:36:59.697854   58299 out.go:177] * Using the docker driver based on user configuration
	I0916 10:36:59.699133   58299 start.go:297] selected driver: docker
	I0916 10:36:59.699150   58299 start.go:901] validating driver "docker" against <nil>
	I0916 10:36:59.699162   58299 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:36:59.699956   58299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:36:59.752267   58299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:36:59.740856159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:36:59.752512   58299 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:36:59.752832   58299 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:36:59.754857   58299 out.go:177] * Using Docker driver with root privileges
	I0916 10:36:59.756598   58299 cni.go:84] Creating CNI manager for ""
	I0916 10:36:59.756649   58299 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:36:59.756662   58299 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:36:59.756765   58299 start.go:340] cluster config:
	{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:59.758448   58299 out.go:177] * Starting "ha-107957" primary control-plane node in "ha-107957" cluster
	I0916 10:36:59.759759   58299 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:36:59.761144   58299 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:36:59.762275   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:36:59.762316   58299 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:36:59.762325   58299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:36:59.762441   58299 cache.go:56] Caching tarball of preloaded images
	I0916 10:36:59.762548   58299 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:36:59.762566   58299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:36:59.763017   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:36:59.763050   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json: {Name:mkc6efad42d7e4a853da28912b65bbd6a7d5e70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 10:36:59.783435   58299 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:36:59.783453   58299 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:36:59.783516   58299 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:36:59.783530   58299 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:36:59.783534   58299 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:36:59.783541   58299 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:36:59.783546   58299 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:36:59.784658   58299 image.go:273] response: 
	I0916 10:36:59.844896   58299 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:36:59.844957   58299 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:36:59.844993   58299 start.go:360] acquireMachinesLock for ha-107957: {Name:mkd47d2ce5dbb0c6b4cd5ea9479cc8820c855026 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:36:59.845117   58299 start.go:364] duration metric: took 103.785µs to acquireMachinesLock for "ha-107957"
	I0916 10:36:59.845144   58299 start.go:93] Provisioning new machine with config: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:36:59.845216   58299 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:36:59.847363   58299 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:36:59.847606   58299 start.go:159] libmachine.API.Create for "ha-107957" (driver="docker")
	I0916 10:36:59.847632   58299 client.go:168] LocalClient.Create starting
	I0916 10:36:59.847693   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:36:59.847724   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:36:59.847736   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:36:59.847777   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:36:59.847798   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:36:59.847808   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:36:59.848117   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:36:59.866348   58299 cli_runner.go:211] docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:36:59.866437   58299 network_create.go:284] running [docker network inspect ha-107957] to gather additional debugging logs...
	I0916 10:36:59.866458   58299 cli_runner.go:164] Run: docker network inspect ha-107957
	W0916 10:36:59.884107   58299 cli_runner.go:211] docker network inspect ha-107957 returned with exit code 1
	I0916 10:36:59.884149   58299 network_create.go:287] error running [docker network inspect ha-107957]: docker network inspect ha-107957: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-107957 not found
	I0916 10:36:59.884164   58299 network_create.go:289] output of [docker network inspect ha-107957]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-107957 not found
	
	** /stderr **
	I0916 10:36:59.884296   58299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:36:59.902341   58299 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c8c7b0}
	I0916 10:36:59.902396   58299 network_create.go:124] attempt to create docker network ha-107957 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:36:59.902454   58299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-107957 ha-107957
	I0916 10:36:59.966916   58299 network_create.go:108] docker network ha-107957 192.168.49.0/24 created
	I0916 10:36:59.966962   58299 kic.go:121] calculated static IP "192.168.49.2" for the "ha-107957" container
	I0916 10:36:59.967037   58299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:36:59.983709   58299 cli_runner.go:164] Run: docker volume create ha-107957 --label name.minikube.sigs.k8s.io=ha-107957 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:37:00.007615   58299 oci.go:103] Successfully created a docker volume ha-107957
	I0916 10:37:00.007698   58299 cli_runner.go:164] Run: docker run --rm --name ha-107957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957 --entrypoint /usr/bin/test -v ha-107957:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:37:00.506153   58299 oci.go:107] Successfully prepared a docker volume ha-107957
	I0916 10:37:00.506208   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:37:00.506231   58299 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:37:00.506290   58299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:37:04.940269   58299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.433935277s)
	I0916 10:37:04.940305   58299 kic.go:203] duration metric: took 4.434070761s to extract preloaded images to volume ...
	W0916 10:37:04.940441   58299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:37:04.940563   58299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:37:04.990735   58299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-107957 --name ha-107957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-107957 --network ha-107957 --ip 192.168.49.2 --volume ha-107957:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:37:05.296263   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Running}}
	I0916 10:37:05.314573   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:05.333626   58299 cli_runner.go:164] Run: docker exec ha-107957 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:37:05.375828   58299 oci.go:144] the created container "ha-107957" has a running status.
	I0916 10:37:05.375871   58299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa...
	I0916 10:37:05.604964   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:37:05.605006   58299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:37:05.630238   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:05.652100   58299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:37:05.652120   58299 kic_runner.go:114] Args: [docker exec --privileged ha-107957 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:37:05.707244   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:05.730489   58299 machine.go:93] provisionDockerMachine start ...
	I0916 10:37:05.730581   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:05.753671   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:05.753962   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:37:05.753981   58299 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:37:05.952786   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957
	
	I0916 10:37:05.952829   58299 ubuntu.go:169] provisioning hostname "ha-107957"
	I0916 10:37:05.952915   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:05.971519   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:05.971759   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:37:05.971777   58299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957 && echo "ha-107957" | sudo tee /etc/hostname
	I0916 10:37:06.119572   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957
	
	I0916 10:37:06.119642   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:06.136270   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:06.136466   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:37:06.136489   58299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:37:06.265213   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:37:06.265242   58299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:37:06.265288   58299 ubuntu.go:177] setting up certificates
	I0916 10:37:06.265302   58299 provision.go:84] configureAuth start
	I0916 10:37:06.265385   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:37:06.281894   58299 provision.go:143] copyHostCerts
	I0916 10:37:06.281948   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:37:06.281984   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:37:06.281996   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:37:06.282069   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:37:06.282152   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:37:06.282173   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:37:06.282181   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:37:06.282208   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:37:06.282261   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:37:06.282281   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:37:06.282289   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:37:06.282313   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:37:06.282376   58299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957 san=[127.0.0.1 192.168.49.2 ha-107957 localhost minikube]
	I0916 10:37:06.439846   58299 provision.go:177] copyRemoteCerts
	I0916 10:37:06.439906   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:37:06.439942   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:06.456642   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:06.549647   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:37:06.549713   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:37:06.570805   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:37:06.570876   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:37:06.592035   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:37:06.592101   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:37:06.613074   58299 provision.go:87] duration metric: took 347.754949ms to configureAuth
	I0916 10:37:06.613106   58299 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:37:06.613293   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:06.613428   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:06.630199   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:06.630409   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:37:06.630427   58299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:37:06.847080   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:37:06.847108   58299 machine.go:96] duration metric: took 1.116591163s to provisionDockerMachine
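
The CRIO_MINIKUBE_OPTIONS write above is executed over minikube's native SSH client (main.go:141). A rough sketch of running the same command with golang.org/x/crypto/ssh follows; host, port, user, and key path are the ones printed by sshutil.go:53, and the lax host-key handling is only defensible because this is a throwaway test VM.

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	// The exact command from the log above.
    	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
    	out, err := sess.CombinedOutput(cmd)
    	if err != nil {
    		log.Fatal(err)
    	}
    	os.Stdout.Write(out)
    }
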
	I0916 10:37:06.847121   58299 client.go:171] duration metric: took 6.999482958s to LocalClient.Create
	I0916 10:37:06.847136   58299 start.go:167] duration metric: took 6.999530723s to libmachine.API.Create "ha-107957"
	I0916 10:37:06.847145   58299 start.go:293] postStartSetup for "ha-107957" (driver="docker")
	I0916 10:37:06.847162   58299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:37:06.847232   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:37:06.847272   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:06.864290   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:06.958605   58299 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:37:06.961800   58299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:37:06.961830   58299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:37:06.961838   58299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:37:06.961844   58299 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:37:06.961854   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:37:06.961911   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:37:06.961991   58299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:37:06.962000   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:37:06.962091   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:37:06.970311   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:37:06.992153   58299 start.go:296] duration metric: took 144.993123ms for postStartSetup
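
postStartSetup's filesync pass (filesync.go:126/149) mirrors everything under .minikube/files into the node's filesystem, which is how 112082.pem lands in /etc/ssl/certs. A small sketch of that scan, assuming the same layout convention (the path under files/ is the destination path on the node); the printing is illustrative, not minikube's actual copy step:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"log"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	root := "/home/jenkins/minikube-integration/19651-3799/.minikube/files"
    	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		// files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
    		dst := "/" + strings.TrimPrefix(p, root+string(filepath.Separator))
    		fmt.Printf("local asset: %s -> %s\n", p, dst)
    		return nil
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
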
	I0916 10:37:06.992514   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:37:07.010019   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:37:07.010320   58299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:37:07.010374   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:07.027342   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:07.118196   58299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:37:07.122812   58299 start.go:128] duration metric: took 7.277582674s to createHost
	I0916 10:37:07.122838   58299 start.go:83] releasing machines lock for "ha-107957", held for 7.277707937s
	I0916 10:37:07.122897   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:37:07.139939   58299 ssh_runner.go:195] Run: cat /version.json
	I0916 10:37:07.139963   58299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:37:07.139988   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:07.140039   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:07.157654   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:07.157822   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:07.248853   58299 ssh_runner.go:195] Run: systemctl --version
	I0916 10:37:07.327017   58299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:37:07.463377   58299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:37:07.467690   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:37:07.485312   58299 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:37:07.485399   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:37:07.511852   58299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
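
The two find/-exec runs above neutralize conflicting CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. A rough Go equivalent of the rename pass (glob patterns copied from the cni.go lines; it would need to run as root against a real /etc/cni/net.d):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pat := range []string{"*bridge*", "*podman*"} {
    		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous pass
    			}
    			fmt.Printf("disabling %s\n", m)
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				log.Fatal(err)
    			}
    		}
    	}
    }
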
	I0916 10:37:07.511876   58299 start.go:495] detecting cgroup driver to use...
	I0916 10:37:07.511915   58299 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:37:07.511971   58299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:37:07.525710   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:37:07.536183   58299 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:37:07.536255   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:37:07.548767   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:37:07.561803   58299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:37:07.636189   58299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:37:07.720664   58299 docker.go:233] disabling docker service ...
	I0916 10:37:07.720733   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:37:07.739328   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:37:07.749960   58299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:37:07.828562   58299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:37:07.908170   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:37:07.918586   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:37:07.933088   58299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:37:07.933141   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.942185   58299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:37:07.942257   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.951755   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.960742   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.970406   58299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:37:07.979105   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.988477   58299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:08.003007   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:08.011742   58299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:37:08.019640   58299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:37:08.027221   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:08.098376   58299 ssh_runner.go:195] Run: sudo systemctl restart crio
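
The block above rewrites the CRI-O drop-in with sed (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts the service. A sketch of the first two rewrites as an in-place regexp edit, with the path and values taken from the log:

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Same whole-line substitutions as the sed commands above.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
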
	I0916 10:37:08.192013   58299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:37:08.192079   58299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:37:08.195597   58299 start.go:563] Will wait 60s for crictl version
	I0916 10:37:08.195647   58299 ssh_runner.go:195] Run: which crictl
	I0916 10:37:08.198745   58299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:37:08.229778   58299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:37:08.229860   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:37:08.262707   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:37:08.298338   58299 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:37:08.299827   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:37:08.316399   58299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:37:08.319895   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
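
The bash one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, then re-append the gateway mapping. The same logic as a Go sketch (IP and hostname copied from the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any existing mapping, mirroring the grep -v above.
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.49.1\thost.minikube.internal")
    	out := strings.Join(kept, "\n") + "\n"
    	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(out)
    }
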
	I0916 10:37:08.330759   58299 kubeadm.go:883] updating cluster {Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:37:08.330882   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:37:08.330935   58299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:37:08.392137   58299 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:37:08.392166   58299 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:37:08.392230   58299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:37:08.423229   58299 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:37:08.423250   58299 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:37:08.423257   58299 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:37:08.423338   58299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:37:08.423398   58299 ssh_runner.go:195] Run: crio config
	I0916 10:37:08.463060   58299 cni.go:84] Creating CNI manager for ""
	I0916 10:37:08.463079   58299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:37:08.463090   58299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:37:08.463109   58299 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-107957 NodeName:ha-107957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:37:08.463248   58299 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-107957"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
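
The kubeadm config above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml. A sketch of walking those documents with gopkg.in/yaml.v3, handy when debugging which document carries a given knob; the path comes from the log, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		// Decode one "---"-separated document per iteration.
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }
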
	I0916 10:37:08.463271   58299 kube-vip.go:115] generating kube-vip config ...
	I0916 10:37:08.463309   58299 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:37:08.474407   58299 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:37:08.474508   58299 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
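
Note the fallback above: because lsmod shows no ip_vs modules inside the kic container, kube-vip.go gives up on IPVS-based control-plane load-balancing, and the generated manifest relies on leader-elected ARP failover for the VIP instead (vip_arp/vip_leaderelection). A sketch of that capability probe; the interpretation of the two branches is mine, not minikube's exact wording:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("lsmod").Output()
    	if err == nil && strings.Contains(string(out), "ip_vs") {
    		fmt.Println("ip_vs modules present: IPVS load-balancing could be enabled")
    	} else {
    		fmt.Println("ip_vs modules missing: ARP-only VIP failover (as in the log)")
    	}
    }
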
	I0916 10:37:08.474558   58299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:37:08.482233   58299 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:37:08.482294   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:37:08.489911   58299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0916 10:37:08.505379   58299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:37:08.523137   58299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0916 10:37:08.539035   58299 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 10:37:08.555396   58299 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:37:08.558912   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:37:08.569471   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:08.643150   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:37:08.655259   58299 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.2
	I0916 10:37:08.655281   58299 certs.go:194] generating shared ca certs ...
	I0916 10:37:08.655302   58299 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.655465   58299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:37:08.655513   58299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:37:08.655526   58299 certs.go:256] generating profile certs ...
	I0916 10:37:08.655584   58299 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:37:08.655612   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt with IP's: []
	I0916 10:37:08.751754   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt ...
	I0916 10:37:08.751786   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt: {Name:mk3ab8542401b8617feb30dcb924978b7ec3a34d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.751954   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key ...
	I0916 10:37:08.751965   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key: {Name:mkc20b79a2c080fec017a4b392198b3d6dc3a922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.752038   58299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.717802a3
	I0916 10:37:08.752052   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.717802a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 10:37:08.885097   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.717802a3 ...
	I0916 10:37:08.885134   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.717802a3: {Name:mk051112b9fba334b7ed02cba0916716ba024ac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.885387   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.717802a3 ...
	I0916 10:37:08.885408   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.717802a3: {Name:mke96f86839b0890c15fe3dd30fc968634547331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.885544   58299 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.717802a3 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
	I0916 10:37:08.885793   58299 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.717802a3 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key
	I0916 10:37:08.885880   58299 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:37:08.885904   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt with IP's: []
	I0916 10:37:08.936312   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt ...
	I0916 10:37:08.936345   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt: {Name:mk2a951a04a3eac4ee0442d03ef1c1850492250e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.936526   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key ...
	I0916 10:37:08.936545   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key: {Name:mkdf74098b886a4bb48cd3af60493afa29ff1d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.936653   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:37:08.936672   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:37:08.936685   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:37:08.936703   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:37:08.936721   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:37:08.936738   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:37:08.936751   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:37:08.936763   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:37:08.936827   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:37:08.936872   58299 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:37:08.936887   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:37:08.936925   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:37:08.936954   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:37:08.936988   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:37:08.937038   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:37:08.937089   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:37:08.937111   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:08.937129   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:37:08.937799   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:37:08.959998   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:37:08.982088   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:37:09.004374   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:37:09.025961   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:37:09.046778   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:37:09.068289   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:37:09.090319   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:37:09.112963   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:37:09.134652   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:37:09.156586   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:37:09.178427   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:37:09.194956   58299 ssh_runner.go:195] Run: openssl version
	I0916 10:37:09.199963   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:37:09.208595   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:09.211731   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:09.211780   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:09.217946   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:37:09.226591   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:37:09.235060   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:37:09.238288   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:37:09.238346   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:37:09.244671   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:37:09.252760   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:37:09.261123   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:37:09.264330   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:37:09.264391   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:37:09.270558   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
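
Each test/ln/openssl triple above installs a CA into /etc/ssl/certs under its OpenSSL subject-hash name, which is how b5213941.0, 51391683.0, and 3ec20f2e.0 arise. A sketch that derives the hash by shelling out to openssl (assumed present on the node) and creates the .0 symlink; the hashLink helper is invented for illustration:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func hashLink(pemPath string) error {
    	// `openssl x509 -hash -noout -in <pem>` prints the subject hash,
    	// e.g. b5213941, which names the /etc/ssl/certs/<hash>.0 symlink.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	os.Remove(link) // replace a stale link, mirroring ln -fs
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	for _, p := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/11208.pem",
    		"/usr/share/ca-certificates/112082.pem",
    	} {
    		if err := hashLink(p); err != nil {
    			log.Fatal(err)
    		}
    	}
    }
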
	I0916 10:37:09.279259   58299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:37:09.282271   58299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:37:09.282328   58299 kubeadm.go:392] StartCluster: {Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:37:09.282415   58299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:37:09.282463   58299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:37:09.314769   58299 cri.go:89] found id: ""
	I0916 10:37:09.314842   58299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:37:09.322996   58299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:37:09.331095   58299 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:37:09.331144   58299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:37:09.339004   58299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:37:09.339022   58299 kubeadm.go:157] found existing configuration files:
	
	I0916 10:37:09.339060   58299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:37:09.346697   58299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:37:09.346759   58299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:37:09.354349   58299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:37:09.362136   58299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:37:09.362191   58299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:37:09.370301   58299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:37:09.378467   58299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:37:09.378518   58299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:37:09.386401   58299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:37:09.394661   58299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:37:09.394712   58299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:37:09.402526   58299 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:37:09.438918   58299 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:37:09.438991   58299 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:37:09.456400   58299 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:37:09.456489   58299 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:37:09.456543   58299 kubeadm.go:310] OS: Linux
	I0916 10:37:09.456616   58299 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:37:09.456698   58299 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:37:09.456774   58299 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:37:09.456844   58299 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:37:09.456945   58299 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:37:09.457039   58299 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:37:09.457112   58299 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:37:09.457181   58299 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:37:09.457254   58299 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:37:09.509540   58299 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:37:09.509704   58299 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:37:09.509879   58299 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:37:09.515810   58299 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:37:09.518861   58299 out.go:235]   - Generating certificates and keys ...
	I0916 10:37:09.518989   58299 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:37:09.519057   58299 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:37:09.764883   58299 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:37:09.936413   58299 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:37:10.049490   58299 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:37:10.126312   58299 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:37:10.382170   58299 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:37:10.382328   58299 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-107957 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:37:10.563937   58299 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:37:10.564073   58299 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-107957 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:37:10.779144   58299 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:37:10.969132   58299 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:37:11.165366   58299 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:37:11.165487   58299 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:37:11.276973   58299 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:37:11.364644   58299 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:37:11.593022   58299 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:37:11.701769   58299 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:37:12.007156   58299 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:37:12.007629   58299 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:37:12.010092   58299 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:37:12.012490   58299 out.go:235]   - Booting up control plane ...
	I0916 10:37:12.012645   58299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:37:12.012798   58299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:37:12.012887   58299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:37:12.021074   58299 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:37:12.026362   58299 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:37:12.026435   58299 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:37:12.104718   58299 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:37:12.104865   58299 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:37:12.606352   58299 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.775519ms
	I0916 10:37:12.606466   58299 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:37:18.648021   58299 kubeadm.go:310] [api-check] The API server is healthy after 6.041604794s
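
Both waits above are plain HTTP health polls: kubeadm probes the kubelet at http://127.0.0.1:10248/healthz and then the API server until each answers 200. A sketch of the kubelet-side loop; the 500ms interval and 4m0s deadline mirror the figures in the log, but are assumptions about the exact implementation:

    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get("http://127.0.0.1:10248/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("kubelet is healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("kubelet did not become healthy within 4m0s")
    }
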
	I0916 10:37:18.658951   58299 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:37:18.670867   58299 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:37:19.192445   58299 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:37:19.192661   58299 kubeadm.go:310] [mark-control-plane] Marking the node ha-107957 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:37:19.200551   58299 kubeadm.go:310] [bootstrap-token] Using token: lf37vj.8fzapfwp2hty22qd
	I0916 10:37:19.201990   58299 out.go:235]   - Configuring RBAC rules ...
	I0916 10:37:19.202135   58299 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:37:19.207027   58299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:37:19.213328   58299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:37:19.215800   58299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:37:19.218422   58299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:37:19.220911   58299 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:37:19.230308   58299 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:37:19.476474   58299 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:37:20.054334   58299 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:37:20.055469   58299 kubeadm.go:310] 
	I0916 10:37:20.055582   58299 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:37:20.055603   58299 kubeadm.go:310] 
	I0916 10:37:20.055691   58299 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:37:20.055700   58299 kubeadm.go:310] 
	I0916 10:37:20.055744   58299 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:37:20.055814   58299 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:37:20.055905   58299 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:37:20.055919   58299 kubeadm.go:310] 
	I0916 10:37:20.055991   58299 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:37:20.056000   58299 kubeadm.go:310] 
	I0916 10:37:20.056063   58299 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:37:20.056072   58299 kubeadm.go:310] 
	I0916 10:37:20.056141   58299 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:37:20.056247   58299 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:37:20.056327   58299 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:37:20.056343   58299 kubeadm.go:310] 
	I0916 10:37:20.056429   58299 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:37:20.056498   58299 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:37:20.056504   58299 kubeadm.go:310] 
	I0916 10:37:20.056572   58299 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lf37vj.8fzapfwp2hty22qd \
	I0916 10:37:20.056673   58299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:37:20.056693   58299 kubeadm.go:310] 	--control-plane 
	I0916 10:37:20.056699   58299 kubeadm.go:310] 
	I0916 10:37:20.056835   58299 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:37:20.056854   58299 kubeadm.go:310] 
	I0916 10:37:20.056976   58299 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lf37vj.8fzapfwp2hty22qd \
	I0916 10:37:20.057144   58299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:37:20.059802   58299 kubeadm.go:310] W0916 10:37:09.436372    1317 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:37:20.060068   58299 kubeadm.go:310] W0916 10:37:09.436985    1317 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:37:20.060339   58299 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:37:20.060492   58299 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:37:20.060521   58299 cni.go:84] Creating CNI manager for ""
	I0916 10:37:20.060532   58299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:37:20.062585   58299 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:37:20.063824   58299 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:37:20.067518   58299 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:37:20.067539   58299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:37:20.085065   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:37:20.278834   58299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:37:20.278925   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:20.278937   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-107957 minikube.k8s.io/updated_at=2024_09_16T10_37_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-107957 minikube.k8s.io/primary=true
	I0916 10:37:20.286426   58299 ops.go:34] apiserver oom_adj: -16
	I0916 10:37:20.346453   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:20.846570   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:21.347558   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:21.846632   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:22.346973   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:22.846611   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:23.347416   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:23.846503   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:24.347506   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:24.420953   58299 kubeadm.go:1113] duration metric: took 4.142098503s to wait for elevateKubeSystemPrivileges
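
The half-second drumbeat of `kubectl get sa default` above is minikube waiting for the default ServiceAccount to exist before granting kube-system its cluster-admin binding (the minikube-rbac step). A sketch of that poll via exec, with the kubectl path and kubeconfig from the log and an assumed overall budget:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // budget is an assumption
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default ServiceAccount exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for the default ServiceAccount")
    }
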
	I0916 10:37:24.420986   58299 kubeadm.go:394] duration metric: took 15.138663112s to StartCluster
	I0916 10:37:24.421003   58299 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:24.421066   58299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:37:24.421733   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:24.421949   58299 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:37:24.421977   58299 start.go:241] waiting for startup goroutines ...
	I0916 10:37:24.421993   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:37:24.421992   58299 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:37:24.422083   58299 addons.go:69] Setting storage-provisioner=true in profile "ha-107957"
	I0916 10:37:24.422090   58299 addons.go:69] Setting default-storageclass=true in profile "ha-107957"
	I0916 10:37:24.422102   58299 addons.go:234] Setting addon storage-provisioner=true in "ha-107957"
	I0916 10:37:24.422114   58299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-107957"
	I0916 10:37:24.422129   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:37:24.422166   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:24.422486   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:24.422567   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:24.442732   58299 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:37:24.443096   58299 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:37:24.443827   58299 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:37:24.444138   58299 addons.go:234] Setting addon default-storageclass=true in "ha-107957"
	I0916 10:37:24.444184   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:37:24.444776   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:24.449256   58299 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:37:24.450714   58299 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:37:24.450735   58299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:37:24.450791   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:24.463468   58299 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:37:24.463492   58299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:37:24.463556   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:24.473766   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:24.481511   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:24.518044   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:37:24.715638   58299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:37:24.716533   58299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:37:25.000987   58299 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
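For reference, the hosts block that the sed pipeline above splices into the CoreDNS Corefile (ahead of the forward directive) is, exactly as encoded in that command:

            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }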
	I0916 10:37:25.254569   58299 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:37:25.254602   58299 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:37:25.254706   58299 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:37:25.254718   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:25.254728   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:25.254733   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:25.261525   58299 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:37:25.262050   58299 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:37:25.262066   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:25.262074   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:25.262077   58299 round_trippers.go:473]     Content-Type: application/json
	I0916 10:37:25.262080   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:25.264110   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:25.265876   58299 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:37:25.267125   58299 addons.go:510] duration metric: took 845.133033ms for enable addons: enabled=[storage-provisioner default-storageclass]
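The GET/PUT pair against /storageclasses above is minikube re-asserting "standard" as the default StorageClass. One illustrative way to confirm the result afterwards (not part of this log):

    kubectl get storageclass standard \
      -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
    # expected output: true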
	I0916 10:37:25.267157   58299 start.go:246] waiting for cluster config update ...
	I0916 10:37:25.267168   58299 start.go:255] writing updated cluster config ...
	I0916 10:37:25.268737   58299 out.go:201] 
	I0916 10:37:25.270294   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:25.270354   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:37:25.272185   58299 out.go:177] * Starting "ha-107957-m02" control-plane node in "ha-107957" cluster
	I0916 10:37:25.273722   58299 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:37:25.275133   58299 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:37:25.277028   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:37:25.277054   58299 cache.go:56] Caching tarball of preloaded images
	I0916 10:37:25.277117   58299 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:37:25.277157   58299 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:37:25.277167   58299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:37:25.277237   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:37:25.296591   58299 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:37:25.296612   58299 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:37:25.296699   58299 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:37:25.296718   58299 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:37:25.296724   58299 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:37:25.296733   58299 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:37:25.296741   58299 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:37:25.297950   58299 image.go:273] response: 
	I0916 10:37:25.354963   58299 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:37:25.355008   58299 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:37:25.355043   58299 start.go:360] acquireMachinesLock for ha-107957-m02: {Name:mkbd1a70c826dc0de88173dfa3a4a79ea68a23fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:37:25.355135   58299 start.go:364] duration metric: took 74.612µs to acquireMachinesLock for "ha-107957-m02"
	I0916 10:37:25.355163   58299 start.go:93] Provisioning new machine with config: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:37:25.355270   58299 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 10:37:25.357344   58299 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:37:25.357465   58299 start.go:159] libmachine.API.Create for "ha-107957" (driver="docker")
	I0916 10:37:25.357493   58299 client.go:168] LocalClient.Create starting
	I0916 10:37:25.357554   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:37:25.357584   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:37:25.357600   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:37:25.357655   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:37:25.357674   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:37:25.357683   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:37:25.357862   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:37:25.376207   58299 network_create.go:77] Found existing network {name:ha-107957 subnet:0xc001891350 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 10:37:25.376259   58299 kic.go:121] calculated static IP "192.168.49.3" for the "ha-107957-m02" container
	I0916 10:37:25.376329   58299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:37:25.393281   58299 cli_runner.go:164] Run: docker volume create ha-107957-m02 --label name.minikube.sigs.k8s.io=ha-107957-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:37:25.411592   58299 oci.go:103] Successfully created a docker volume ha-107957-m02
	I0916 10:37:25.411675   58299 cli_runner.go:164] Run: docker run --rm --name ha-107957-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957-m02 --entrypoint /usr/bin/test -v ha-107957-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:37:26.040595   58299 oci.go:107] Successfully prepared a docker volume ha-107957-m02
	I0916 10:37:26.040631   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:37:26.040654   58299 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:37:26.040730   58299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:37:30.365019   58299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.324233407s)
	I0916 10:37:30.365053   58299 kic.go:203] duration metric: took 4.324395448s to extract preloaded images to volume ...
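The two docker run calls above are the preload trick: a throwaway container seeds the ha-107957-m02 volume, then a second container untars the cached images into it. A hypothetical spot-check of the result (the storage path is the CRI-O default and is assumed here; this was not run in this log):

    docker run --rm --entrypoint /bin/ls -v ha-107957-m02:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644 \
      /var/lib/containers/storage   # CRI-O's image store, populated by the extract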
	W0916 10:37:30.365194   58299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:37:30.365304   58299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:37:30.412924   58299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-107957-m02 --name ha-107957-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-107957-m02 --network ha-107957 --ip 192.168.49.3 --volume ha-107957-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:37:30.712859   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Running}}
	I0916 10:37:30.730995   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:37:30.750011   58299 cli_runner.go:164] Run: docker exec ha-107957-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:37:30.792867   58299 oci.go:144] the created container "ha-107957-m02" has a running status.
	I0916 10:37:30.792893   58299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa...
	I0916 10:37:31.034298   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:37:31.034413   58299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:37:31.060538   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:37:31.078037   58299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:37:31.078058   58299 kic_runner.go:114] Args: [docker exec --privileged ha-107957-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:37:31.130198   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:37:31.150041   58299 machine.go:93] provisionDockerMachine start ...
	I0916 10:37:31.150128   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:31.167046   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:31.167267   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:37:31.167277   58299 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:37:31.380753   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m02
	
	I0916 10:37:31.380781   58299 ubuntu.go:169] provisioning hostname "ha-107957-m02"
	I0916 10:37:31.380828   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:31.399796   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:31.400018   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:37:31.400033   58299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m02 && echo "ha-107957-m02" | sudo tee /etc/hostname
	I0916 10:37:31.544605   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m02
	
	I0916 10:37:31.544683   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:31.561265   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:31.561506   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:37:31.561532   58299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:37:31.693615   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:37:31.693666   58299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:37:31.693687   58299 ubuntu.go:177] setting up certificates
	I0916 10:37:31.693702   58299 provision.go:84] configureAuth start
	I0916 10:37:31.693762   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:37:31.709997   58299 provision.go:143] copyHostCerts
	I0916 10:37:31.710033   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:37:31.710060   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:37:31.710069   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:37:31.710136   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:37:31.710216   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:37:31.710233   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:37:31.710240   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:37:31.710263   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:37:31.710305   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:37:31.710321   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:37:31.710327   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:37:31.710346   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:37:31.710416   58299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m02 san=[127.0.0.1 192.168.49.3 ha-107957-m02 localhost minikube]
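The san=[...] list above becomes the Subject Alternative Name extension of the generated server certificate; an illustrative openssl inspection of it (not run here):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'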
	I0916 10:37:32.282283   58299 provision.go:177] copyRemoteCerts
	I0916 10:37:32.282343   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:37:32.282376   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:32.299781   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:32.395184   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:37:32.395245   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:37:32.417432   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:37:32.417512   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:37:32.439205   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:37:32.439281   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:37:32.461698   58299 provision.go:87] duration metric: took 767.984839ms to configureAuth
	I0916 10:37:32.461725   58299 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:37:32.461884   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:32.461973   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:32.478213   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:32.478401   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:37:32.478417   58299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:37:32.702104   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:37:32.702135   58299 machine.go:96] duration metric: took 1.552075835s to provisionDockerMachine
	I0916 10:37:32.702145   58299 client.go:171] duration metric: took 7.344647339s to LocalClient.Create
	I0916 10:37:32.702162   58299 start.go:167] duration metric: took 7.344697738s to libmachine.API.Create "ha-107957"
	I0916 10:37:32.702168   58299 start.go:293] postStartSetup for "ha-107957-m02" (driver="docker")
	I0916 10:37:32.702178   58299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:37:32.702230   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:37:32.702266   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:32.719256   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:32.818926   58299 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:37:32.821997   58299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:37:32.822029   58299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:37:32.822037   58299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:37:32.822043   58299 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:37:32.822052   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:37:32.822116   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:37:32.822202   58299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:37:32.822214   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:37:32.822322   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:37:32.830386   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:37:32.853244   58299 start.go:296] duration metric: took 151.062688ms for postStartSetup
	I0916 10:37:32.853622   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:37:32.870415   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:37:32.870701   58299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:37:32.870743   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:32.887578   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:32.978119   58299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:37:32.982241   58299 start.go:128] duration metric: took 7.62695291s to createHost
	I0916 10:37:32.982274   58299 start.go:83] releasing machines lock for "ha-107957-m02", held for 7.627124916s
	I0916 10:37:32.982354   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:37:33.002043   58299 out.go:177] * Found network options:
	I0916 10:37:33.003837   58299 out.go:177]   - NO_PROXY=192.168.49.2
	W0916 10:37:33.005528   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:37:33.005577   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:37:33.005656   58299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:37:33.005706   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:33.005719   58299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:37:33.005766   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:33.022990   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:33.023413   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:33.261178   58299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:37:33.265498   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:37:33.283301   58299 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:37:33.283380   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:37:33.311542   58299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:37:33.311568   58299 start.go:495] detecting cgroup driver to use...
	I0916 10:37:33.311597   58299 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:37:33.311665   58299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:37:33.325590   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:37:33.336328   58299 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:37:33.336378   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:37:33.348934   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:37:33.362149   58299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:37:33.436372   58299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:37:33.516403   58299 docker.go:233] disabling docker service ...
	I0916 10:37:33.516466   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:37:33.534110   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:37:33.545090   58299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:37:33.618580   58299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:37:33.699969   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:37:33.711041   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:37:33.725983   58299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:37:33.726037   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.735506   58299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:37:33.735567   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.744790   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.754076   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.763393   58299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:37:33.771975   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.780729   58299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.794841   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.803921   58299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:37:33.812615   58299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
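Taken together, the sed edits above leave the CRI-O drop-in looking roughly like this. The section headers and any surrounding keys are assumed from a stock kicbase image; only the shown values are grounded in the logged commands:

    # /etc/crio/crio.conf.d/02-crio.conf (approximate result)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]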
	I0916 10:37:33.820773   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:33.901372   58299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:37:34.010086   58299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:37:34.010146   58299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:37:34.013617   58299 start.go:563] Will wait 60s for crictl version
	I0916 10:37:34.013673   58299 ssh_runner.go:195] Run: which crictl
	I0916 10:37:34.016752   58299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:37:34.049238   58299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:37:34.049315   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:37:34.081490   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:37:34.117067   58299 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:37:34.118543   58299 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:37:34.120114   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:37:34.137233   58299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:37:34.140814   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
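The temp-file-then-sudo-cp idiom above is needed because shell redirection runs in the unprivileged calling shell, not under sudo. A simpler append-only variant (illustrative; unlike the original it would not replace a stale entry):

    echo "192.168.49.1	host.minikube.internal" | sudo tee -a /etc/hosts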
	I0916 10:37:34.151343   58299 mustload.go:65] Loading cluster: ha-107957
	I0916 10:37:34.151521   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:34.151737   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:34.168300   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:37:34.168549   58299 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.3
	I0916 10:37:34.168559   58299 certs.go:194] generating shared ca certs ...
	I0916 10:37:34.168572   58299 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:34.168722   58299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:37:34.168773   58299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:37:34.168783   58299 certs.go:256] generating profile certs ...
	I0916 10:37:34.168859   58299 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:37:34.168884   58299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b
	I0916 10:37:34.168899   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.f59b195b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 10:37:34.301229   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.f59b195b ...
	I0916 10:37:34.301258   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.f59b195b: {Name:mk774b827afeed5d627c66ef74c7608e9a851512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:34.301452   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b ...
	I0916 10:37:34.301469   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b: {Name:mk992bd5f4fa93f43a7256d7e5350f32ffad3267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:34.301547   58299 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.f59b195b -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
	I0916 10:37:34.301678   58299 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key
	I0916 10:37:34.301801   58299 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:37:34.301818   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:37:34.301839   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:37:34.301852   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:37:34.301865   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:37:34.301879   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:37:34.301891   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:37:34.301902   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:37:34.301914   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:37:34.301962   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:37:34.301992   58299 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:37:34.302001   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:37:34.302023   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:37:34.302046   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:37:34.302066   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:37:34.302102   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:37:34.302127   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:34.302144   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:37:34.302161   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:37:34.302225   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:34.318470   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:34.405685   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:37:34.409645   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:37:34.421144   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:37:34.424272   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:37:34.436239   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:37:34.439711   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:37:34.451270   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:37:34.454457   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:37:34.465684   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:37:34.468808   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:37:34.479927   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:37:34.483274   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:37:34.494765   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:37:34.518944   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:37:34.540774   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:37:34.562951   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:37:34.585188   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 10:37:34.607554   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:37:34.630026   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:37:34.652393   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:37:34.674836   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:37:34.697941   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:37:34.720114   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:37:34.742041   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:37:34.758526   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:37:34.774581   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:37:34.791700   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:37:34.807947   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:37:34.824874   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:37:34.841359   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:37:34.858359   58299 ssh_runner.go:195] Run: openssl version
	I0916 10:37:34.863194   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:37:34.871960   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:37:34.875277   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:37:34.875384   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:37:34.881694   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:37:34.890738   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:37:34.899773   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:34.902995   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:34.903050   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:34.909715   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:37:34.918851   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:37:34.927848   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:37:34.931537   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:37:34.931593   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:37:34.938226   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
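The <hash>.0 symlink names above follow OpenSSL's subject-hash lookup scheme; the hash can be computed directly, shown here with the b5213941 value grounded in the minikubeCA link created a few lines earlier:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching /etc/ssl/certs/b5213941.0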
	I0916 10:37:34.947489   58299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:37:34.950710   58299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:37:34.950756   58299 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0916 10:37:34.950844   58299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:37:34.950873   58299 kube-vip.go:115] generating kube-vip config ...
	I0916 10:37:34.950904   58299 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:37:34.961898   58299 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:37:34.961973   58299 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
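Because the ip_vs probe above found no modules, kube-vip here relies on ARP plus leader election (the plndr-cp-lock lease) rather than IPVS load-balancing, so 192.168.49.254 is answered by a single leader at a time. A hypothetical check, assuming the ip tool is present in the kicbase container:

    docker exec ha-107957 ip addr show eth0 | grep 192.168.49.254
    # the VIP sits on eth0 of whichever control-plane node holds the lease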
	I0916 10:37:34.962022   58299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:37:34.970528   58299 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:37:34.970590   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:37:34.978912   58299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:37:34.995923   58299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:37:35.012920   58299 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:37:35.029471   58299 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:37:35.032620   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
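
The hosts rewrite just issued is idempotent: grep -v strips any existing tab-anchored control-plane.minikube.internal entry, the echo appends the VIP mapping, and the result is staged in /tmp/h.$$ before a single cp into /etc/hosts, so a failed write cannot leave the file truncated. Building that command for an arbitrary VIP (sketch):

    // hostsUpdateCmd rebuilds /etc/hosts with exactly one VIP entry;
    // "\t" is a literal tab, matching the grep pattern's $'\t' anchor.
    func hostsUpdateCmd(vip string) string {
        return `{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; ` +
            `echo "` + vip + "\tcontrol-plane.minikube.internal" + `"; } > /tmp/h.$$; ` +
            `sudo cp /tmp/h.$$ /etc/hosts`
    }
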
	I0916 10:37:35.042418   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:35.119733   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:37:35.133407   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:37:35.133649   58299 start.go:317] joinCluster: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:37:35.133739   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:37:35.133789   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:35.154278   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:35.298604   58299 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:37:35.298644   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5wd6mt.whossothqn01zo81 --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-107957-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 10:37:39.016515   58299 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5wd6mt.whossothqn01zo81 --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-107957-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (3.717843693s)
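
The join that just completed authenticates both directions: the token authorizes the node to the cluster, and --discovery-token-ca-cert-hash lets the node verify the cluster, since the hash is the SHA-256 of the CA certificate's DER-encoded public key (SubjectPublicKeyInfo). Recomputing the sha256:... pin from ca.crt looks roughly like:

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "errors"
    )

    // caCertHash reproduces kubeadm's sha256:<hex> discovery pin.
    func caCertHash(caPEM []byte) (string, error) {
        block, _ := pem.Decode(caPEM)
        if block == nil {
            return "", errors.New("no PEM block in CA cert")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }
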
	I0916 10:37:39.016585   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:37:39.911856   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-107957-m02 minikube.k8s.io/updated_at=2024_09_16T10_37_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-107957 minikube.k8s.io/primary=false
	I0916 10:37:40.021485   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-107957-m02 node-role.kubernetes.io/control-plane:NoSchedule-
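
The trailing "-" on the taint command removes node-role.kubernetes.io/control-plane:NoSchedule, so this HA control-plane node also accepts regular workloads (it was created with Worker:true). In-object, that is just filtering the taint out of Node.Spec.Taints (sketch; corev1 = k8s.io/api/core/v1):

    // removeTaint drops every taint with the given key from the node spec,
    // the client-side equivalent of `kubectl taint nodes <node> <key>:<effect>-`.
    func removeTaint(n *corev1.Node, key string) {
        kept := n.Spec.Taints[:0]
        for _, t := range n.Spec.Taints {
            if t.Key != key {
                kept = append(kept, t)
            }
        }
        n.Spec.Taints = kept
    }
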
	I0916 10:37:40.120388   58299 start.go:319] duration metric: took 4.986732728s to joinCluster
	I0916 10:37:40.120458   58299 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:37:40.120871   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:40.122048   58299 out.go:177] * Verifying Kubernetes components...
	I0916 10:37:40.124605   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:40.710636   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:37:40.800465   58299 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:37:40.800815   58299 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:37:40.800901   58299 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
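
The kubeconfig for this profile points at the HA VIP (192.168.49.254:8443), but the verifier swaps in the primary's direct address before polling (presumably so readiness checks do not depend on kube-vip's leader election settling), which is why every GET below targets 192.168.49.2:8443. Sketch of the override (rationale assumed from the warning above):

    import "k8s.io/client-go/rest"

    // overrideStaleHost replaces the VIP with a concrete control-plane
    // endpoint before cluster verification begins.
    func overrideStaleHost(cfg *rest.Config, directHost string) {
        cfg.Host = directHost // e.g. "https://192.168.49.2:8443"
    }
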
	I0916 10:37:40.801246   58299 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m02" to be "Ready" ...
	I0916 10:37:40.801420   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:40.801432   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:40.801440   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:40.801445   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:40.811587   58299 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
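
That GET now repeats on a ~500ms cadence until the node's Ready condition flips (about 80 polls over the next 40 seconds). The loop behind the node_ready lines is, in client-go terms, roughly:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-fetches the Node until NodeReady is True or the
    // 6m0s budget from start.go:235 runs out.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, cond := range n.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
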
	I0916 10:37:41.302301   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:41.302327   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:41.302339   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:41.302344   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:41.306308   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:37:41.802181   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:41.802204   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:41.802214   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:41.802219   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:41.804886   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:42.301662   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:42.301685   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:42.301693   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:42.301698   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:42.304381   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:42.802328   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:42.802353   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:42.802364   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:42.802372   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:42.807328   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:37:42.807852   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:43.302198   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:43.302219   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:43.302226   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:43.302230   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:43.305035   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:43.801527   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:43.801552   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:43.801564   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:43.801571   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:43.804080   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:44.301987   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:44.302007   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:44.302013   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:44.302017   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:44.304534   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:44.801872   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:44.801893   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:44.801903   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:44.801910   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:44.804820   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:45.301535   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:45.301555   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:45.301563   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:45.301567   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:45.304170   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:45.306018   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:45.801549   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:45.801571   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:45.801578   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:45.801582   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:45.804831   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:37:46.301516   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:46.301536   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:46.301543   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:46.301547   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:46.304387   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:46.801814   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:46.801838   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:46.801847   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:46.801851   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:46.804308   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:47.301754   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:47.301779   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:47.301787   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:47.301791   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:47.304275   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:47.802061   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:47.802082   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:47.802090   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:47.802094   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:47.804821   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:47.805278   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:48.301505   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:48.301525   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:48.301533   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:48.301537   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:48.304326   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:48.802241   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:48.802263   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:48.802274   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:48.802281   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:48.805084   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:49.301592   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:49.301621   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:49.301633   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:49.301639   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:49.303956   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:49.802191   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:49.802227   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:49.802234   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:49.802239   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:49.804941   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:49.805629   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:50.301543   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:50.301585   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:50.301594   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:50.301600   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:50.304001   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:50.801527   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:50.801547   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:50.801555   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:50.801559   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:50.804309   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:51.302366   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:51.302390   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:51.302401   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:51.302408   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:51.304894   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:51.801513   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:51.801535   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:51.801545   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:51.801553   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:51.804240   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:52.302137   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:52.302163   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:52.302173   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:52.302179   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:52.304846   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:52.305481   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:52.801547   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:52.801576   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:52.801589   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:52.801595   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:52.804369   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:53.302308   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:53.302328   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:53.302335   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:53.302339   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:53.305223   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:53.801831   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:53.801897   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:53.801910   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:53.801915   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:53.804482   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:54.302458   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:54.302481   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:54.302489   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:54.302495   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:54.305238   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:54.305894   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:54.801464   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:54.801484   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:54.801491   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:54.801496   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:54.804214   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:55.301815   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:55.301838   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:55.301845   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:55.301850   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:55.304496   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:55.802365   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:55.802385   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:55.802393   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:55.802398   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:55.805290   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:56.302157   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:56.302178   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:56.302186   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:56.302189   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:56.304850   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:56.801532   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:56.801553   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:56.801561   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:56.801565   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:56.804488   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:56.805160   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:57.302416   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:57.302436   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:57.302444   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:57.302447   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:57.305363   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:57.802288   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:57.802321   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:57.802333   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:57.802341   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:57.811723   58299 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 10:37:58.302067   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:58.302089   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:58.302098   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:58.302100   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:58.304659   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:58.801524   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:58.801544   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:58.801551   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:58.801557   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:58.804234   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:59.302139   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:59.302158   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:59.302166   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:59.302169   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:59.304804   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:59.305320   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:59.802270   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:59.802295   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:59.802309   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:59.802313   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:59.804903   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:00.301734   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:00.301757   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:00.301765   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:00.301769   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:00.304628   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:00.801514   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:00.801535   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:00.801543   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:00.801546   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:00.804443   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:01.302374   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:01.302397   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:01.302412   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:01.302415   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:01.305170   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:01.305665   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:01.802045   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:01.802066   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:01.802074   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:01.802079   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:01.804686   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:02.301476   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:02.301496   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:02.301504   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:02.301508   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:02.304452   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:02.802138   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:02.802166   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:02.802174   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:02.802177   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:02.804937   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:03.301509   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:03.301531   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:03.301547   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:03.301567   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:03.304473   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:03.802309   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:03.802379   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:03.802392   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:03.802400   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:03.804934   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:03.805395   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:04.301506   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:04.301529   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:04.301540   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:04.301546   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:04.304289   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:04.801524   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:04.801547   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:04.801555   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:04.801559   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:04.804452   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:05.302041   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:05.302067   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:05.302075   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:05.302079   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:05.304793   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:05.801515   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:05.801537   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:05.801545   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:05.801550   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:05.804379   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:06.302227   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:06.302252   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:06.302261   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:06.302267   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:06.305289   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:06.305885   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:06.802185   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:06.802208   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:06.802216   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:06.802219   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:06.804966   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:07.301478   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:07.301498   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:07.301506   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:07.301510   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:07.304142   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:07.802119   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:07.802144   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:07.802154   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:07.802160   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:07.804835   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:08.301551   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:08.301571   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:08.301582   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:08.301587   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:08.304309   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:08.802410   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:08.802431   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:08.802441   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:08.802454   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:08.805162   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:08.805628   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:09.302238   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:09.302262   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:09.302274   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:09.302280   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:09.304866   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:09.802217   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:09.802240   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:09.802248   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:09.802252   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:09.804934   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:10.301534   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:10.301558   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:10.301570   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:10.301576   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:10.304330   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:10.802225   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:10.802247   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:10.802255   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:10.802260   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:10.804948   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:11.301530   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:11.301552   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:11.301566   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:11.301571   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:11.304365   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:11.304844   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:11.802203   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:11.802230   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:11.802240   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:11.802247   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:11.805188   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:12.302155   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:12.302178   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:12.302188   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:12.302193   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:12.304924   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:12.801529   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:12.801549   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:12.801555   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:12.801558   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:12.804066   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:13.301526   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:13.301546   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:13.301554   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:13.301559   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:13.304301   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:13.304921   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:13.801876   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:13.801897   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:13.801908   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:13.801913   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:13.804574   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:14.302479   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:14.302500   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:14.302508   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:14.302512   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:14.305106   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:14.802395   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:14.802416   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:14.802424   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:14.802428   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:14.805141   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:15.301516   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:15.301537   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:15.301545   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:15.301549   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:15.304261   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:15.801743   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:15.801777   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:15.801785   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:15.801788   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:15.804637   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:15.805139   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:16.301470   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:16.301496   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:16.301503   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:16.301507   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:16.304238   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:16.802170   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:16.802193   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:16.802200   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:16.802204   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:16.804626   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:17.302462   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:17.302488   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:17.302502   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:17.302508   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:17.305592   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:17.802472   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:17.802493   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:17.802501   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:17.802506   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:17.805055   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:17.805544   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:18.301522   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:18.301541   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:18.301550   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:18.301555   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:18.304290   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:18.802051   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:18.802090   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:18.802099   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:18.802103   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:18.805022   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:19.301527   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:19.301548   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:19.301556   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:19.301561   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:19.304219   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:19.802426   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:19.802447   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:19.802454   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:19.802461   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:19.805114   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:19.805765   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:20.301502   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:20.301544   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.301553   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.301557   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.304392   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.802427   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:20.802454   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.802467   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.802475   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.805184   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.805685   58299 node_ready.go:49] node "ha-107957-m02" has status "Ready":"True"
	I0916 10:38:20.805707   58299 node_ready.go:38] duration metric: took 40.004435194s for node "ha-107957-m02" to be "Ready" ...
	I0916 10:38:20.805739   58299 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
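
The pod phase lists kube-system once, then waits per pod: every pod carrying one of the system-critical labels above must report the PodReady condition True, and each pod GET below is paired with a Node GET to confirm the pod's host is still Ready. The per-pod predicate reduces to (corev1 = k8s.io/api/core/v1):

    // podReady inspects the PodReady condition, the signal behind the
    // `has status "Ready":"True"` lines that follow.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
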
	I0916 10:38:20.805837   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:20.805853   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.805862   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.805869   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.809565   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:20.815076   58299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.815153   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:38:20.815163   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.815170   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.815173   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.817483   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.818169   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:20.818186   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.818196   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.818200   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.820284   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.820794   58299 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:20.820810   58299 pod_ready.go:82] duration metric: took 5.712221ms for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.820819   58299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.820876   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:38:20.820883   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.820890   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.820894   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.823188   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.823919   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:20.823936   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.823944   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.823948   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.826129   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.826616   58299 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:20.826635   58299 pod_ready.go:82] duration metric: took 5.808507ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.826644   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.826696   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:38:20.826704   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.826711   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.826717   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.830219   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:20.830919   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:20.830938   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.830949   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.830953   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.834909   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:20.835471   58299 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:20.835493   58299 pod_ready.go:82] duration metric: took 8.841297ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.835506   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.835573   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:38:20.835585   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.835594   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.835603   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.837675   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.838355   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:20.838372   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.838382   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.838388   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.840341   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:38:20.840760   58299 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:20.840777   58299 pod_ready.go:82] duration metric: took 5.263219ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.840795   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.003172   58299 request.go:632] Waited for 162.309743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:38:21.003259   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:38:21.003269   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.003277   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.003280   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.006190   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:21.203208   58299 request.go:632] Waited for 196.385519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:21.203296   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:21.203304   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.203318   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.203330   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.206174   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:21.206680   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:21.206700   58299 pod_ready.go:82] duration metric: took 365.897277ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
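
The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries in this run come from client-go's local rate limiter (default QPS=5, Burst=10), not from server-side APF. A minimal sketch of how a caller could raise those limits on a rest.Config; the kubeconfig path and the values 50/100 are illustrative assumptions, not minikube's settings:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Resolve a kubeconfig the way kubectl does (path is an assumption).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to QPS=5, Burst=10; requests beyond that are
    	// delayed locally, which is exactly what produces the
    	// "client-side throttling" waits in the log above.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", cs)
    }
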
	I0916 10:38:21.206710   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.402762   58299 request.go:632] Waited for 195.962152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:38:21.402841   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:38:21.402857   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.402872   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.402881   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.405784   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:21.602767   58299 request.go:632] Waited for 196.360303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:21.602845   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:21.602854   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.602862   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.602870   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.605413   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:21.605893   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:21.605911   58299 pod_ready.go:82] duration metric: took 399.19447ms for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.605921   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.803002   58299 request.go:632] Waited for 197.006404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:38:21.803053   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:38:21.803058   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.803065   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.803073   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.805695   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.002864   58299 request.go:632] Waited for 196.382399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:22.002937   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:22.002944   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.002957   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.002968   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.005777   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.006329   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:22.006351   58299 pod_ready.go:82] duration metric: took 400.424868ms for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.006367   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.203384   58299 request.go:632] Waited for 196.945751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:38:22.203460   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:38:22.203465   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.203477   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.203484   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.206358   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.403327   58299 request.go:632] Waited for 196.250326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:22.403377   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:22.403382   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.403390   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.403394   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.406247   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.406759   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:22.406778   58299 pod_ready.go:82] duration metric: took 400.403552ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.406788   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.602937   58299 request.go:632] Waited for 196.085399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:38:22.603008   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:38:22.603015   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.603022   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.603030   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.606012   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.803074   58299 request.go:632] Waited for 196.341486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:22.803138   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:22.803149   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.803157   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.803162   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.805681   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.806114   58299 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:22.806132   58299 pod_ready.go:82] duration metric: took 399.337302ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.806144   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.003288   58299 request.go:632] Waited for 197.070192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:38:23.003363   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:38:23.003368   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.003375   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.003380   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.006475   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:23.202403   58299 request.go:632] Waited for 195.314585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:23.202476   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:23.202484   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.202493   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.202500   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.205303   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:23.205798   58299 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:23.205818   58299 pod_ready.go:82] duration metric: took 399.666408ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.205831   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.402948   58299 request.go:632] Waited for 197.03533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:38:23.403030   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:38:23.403035   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.403043   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.403049   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.405757   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:23.602642   58299 request.go:632] Waited for 196.301879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:23.602734   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:23.602747   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.602755   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.602759   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.605540   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:23.606064   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:23.606083   58299 pod_ready.go:82] duration metric: took 400.245071ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.606093   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.803170   58299 request.go:632] Waited for 197.002757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:38:23.803255   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:38:23.803265   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.803273   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.803326   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.806066   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:24.002961   58299 request.go:632] Waited for 196.361904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:24.003034   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:24.003046   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.003064   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.003090   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.005862   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:24.006294   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:24.006311   58299 pod_ready.go:82] duration metric: took 400.210196ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:24.006321   58299 pod_ready.go:39] duration metric: took 3.200563132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
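
Each of the waits above follows the same two-step pattern: GET the pod, test its Ready condition, then GET the node it runs on and test node readiness. A condensed sketch of the condition check with client-go, assuming the namespace and pod name from this run; error handling is trimmed to the essentials:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-107957", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", podReady(pod))
    }
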
	I0916 10:38:24.006335   58299 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:38:24.006403   58299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:38:24.017393   58299 api_server.go:72] duration metric: took 43.896903419s to wait for apiserver process to appear ...
	I0916 10:38:24.017419   58299 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:38:24.017444   58299 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:38:24.022347   58299 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
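
The healthz probe is a plain HTTPS GET whose body is the literal string "ok" on success; /healthz is readable without client credentials under the default system:public-info-viewer binding. A sketch with net/http; the ca.crt path is an assumption based on minikube's usual profile layout:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	// Assumed CA location; this run's profile lives under
    	// /home/jenkins/minikube-integration/19651-3799/.minikube.
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status, string(body)) // expect: 200 OK ok
    }
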
	I0916 10:38:24.022418   58299 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:38:24.022426   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.022434   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.022438   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.023255   58299 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:38:24.023378   58299 api_server.go:141] control plane version: v1.31.1
	I0916 10:38:24.023396   58299 api_server.go:131] duration metric: took 5.971353ms to wait for apiserver health ...
	I0916 10:38:24.023404   58299 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:38:24.202861   58299 request.go:632] Waited for 179.38976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:24.202958   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:24.202970   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.202979   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.202987   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.207117   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:24.211158   58299 system_pods.go:59] 17 kube-system pods found
	I0916 10:38:24.211185   58299 system_pods.go:61] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:38:24.211190   58299 system_pods.go:61] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:38:24.211194   58299 system_pods.go:61] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:38:24.211198   58299 system_pods.go:61] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:38:24.211201   58299 system_pods.go:61] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:38:24.211204   58299 system_pods.go:61] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:38:24.211207   58299 system_pods.go:61] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:38:24.211210   58299 system_pods.go:61] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:38:24.211213   58299 system_pods.go:61] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:38:24.211216   58299 system_pods.go:61] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:38:24.211220   58299 system_pods.go:61] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:38:24.211223   58299 system_pods.go:61] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:38:24.211226   58299 system_pods.go:61] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:38:24.211229   58299 system_pods.go:61] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:38:24.211231   58299 system_pods.go:61] "kube-vip-ha-107957" [f6ff7681-062a-4c0b-a621-4b5c3079ee99] Running
	I0916 10:38:24.211234   58299 system_pods.go:61] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:38:24.211236   58299 system_pods.go:61] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:38:24.211244   58299 system_pods.go:74] duration metric: took 187.832357ms to wait for pod list to return data ...
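
The pod inventory above is a single List call over the kube-system namespace; each line prints the pod name, its UID, and its phase. A sketch reproducing that listing (kubeconfig resolution via clientcmd; output format approximated):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }
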
	I0916 10:38:24.211254   58299 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:38:24.402614   58299 request.go:632] Waited for 191.282955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:38:24.402708   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:38:24.402722   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.402731   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.402741   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.405729   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:24.405961   58299 default_sa.go:45] found service account: "default"
	I0916 10:38:24.405980   58299 default_sa.go:55] duration metric: took 194.718283ms for default service account to be created ...
	I0916 10:38:24.405991   58299 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:38:24.603485   58299 request.go:632] Waited for 197.425301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:24.603565   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:24.603574   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.603591   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.603746   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.608223   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:24.612176   58299 system_pods.go:86] 17 kube-system pods found
	I0916 10:38:24.612232   58299 system_pods.go:89] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:38:24.612245   58299 system_pods.go:89] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:38:24.612255   58299 system_pods.go:89] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:38:24.612260   58299 system_pods.go:89] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:38:24.612266   58299 system_pods.go:89] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:38:24.612270   58299 system_pods.go:89] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:38:24.612274   58299 system_pods.go:89] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:38:24.612283   58299 system_pods.go:89] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:38:24.612287   58299 system_pods.go:89] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:38:24.612293   58299 system_pods.go:89] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:38:24.612297   58299 system_pods.go:89] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:38:24.612301   58299 system_pods.go:89] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:38:24.612304   58299 system_pods.go:89] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:38:24.612310   58299 system_pods.go:89] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:38:24.612314   58299 system_pods.go:89] "kube-vip-ha-107957" [f6ff7681-062a-4c0b-a621-4b5c3079ee99] Running
	I0916 10:38:24.612319   58299 system_pods.go:89] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:38:24.612326   58299 system_pods.go:89] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:38:24.612332   58299 system_pods.go:126] duration metric: took 206.336369ms to wait for k8s-apps to be running ...
	I0916 10:38:24.612341   58299 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:38:24.612385   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:38:24.624271   58299 system_svc.go:56] duration metric: took 11.92066ms WaitForService to wait for kubelet
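
systemctl's is-active --quiet signals state purely through its exit code (0 means active), which is why the log records only a duration and no output. A local sketch of the same check via os/exec (the ssh_runner indirection used above is elided):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit status 0 means the unit is active; any non-zero status means
    	// inactive/failed/unknown, so no output parsing is needed.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }
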
	I0916 10:38:24.624302   58299 kubeadm.go:582] duration metric: took 44.503819786s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:38:24.624328   58299 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:38:24.802750   58299 request.go:632] Waited for 178.34473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:38:24.802803   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:38:24.802807   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.802815   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.802819   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.805865   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:24.806570   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:38:24.806594   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:38:24.806613   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:38:24.806617   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:38:24.806621   58299 node_conditions.go:105] duration metric: took 182.289173ms to run NodePressure ...
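
The NodePressure check reads each node's resources straight from node.Status.Capacity, which is where the 304681132Ki ephemeral-storage and 8-CPU figures above come from. A sketch using client-go's ResourceList helper accessors:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity is a ResourceList (map of resource.Quantity values).
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
    			n.Name,
    			n.Status.Capacity.Cpu().String(),
    			n.Status.Capacity.StorageEphemeral().String())
    	}
    }
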
	I0916 10:38:24.806634   58299 start.go:241] waiting for startup goroutines ...
	I0916 10:38:24.806659   58299 start.go:255] writing updated cluster config ...
	I0916 10:38:24.808791   58299 out.go:201] 
	I0916 10:38:24.810381   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:24.810473   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:38:24.812240   58299 out.go:177] * Starting "ha-107957-m03" control-plane node in "ha-107957" cluster
	I0916 10:38:24.814284   58299 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:38:24.815912   58299 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:38:24.817396   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:24.817420   58299 cache.go:56] Caching tarball of preloaded images
	I0916 10:38:24.817492   58299 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:38:24.817547   58299 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:38:24.817589   58299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:38:24.817732   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:38:24.837951   58299 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:38:24.837969   58299 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:38:24.838043   58299 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:38:24.838061   58299 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:38:24.838066   58299 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:38:24.838073   58299 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:38:24.838083   58299 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:38:24.841619   58299 image.go:273] response: 
	I0916 10:38:24.915250   58299 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:38:24.915289   58299 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:38:24.915323   58299 start.go:360] acquireMachinesLock for ha-107957-m03: {Name:mk0f035d5dad9998d086b052d83625d4474d070c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:38:24.915460   58299 start.go:364] duration metric: took 112.213µs to acquireMachinesLock for "ha-107957-m03"
	I0916 10:38:24.915490   58299 start.go:93] Provisioning new machine with config: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:24.915652   58299 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 10:38:24.918037   58299 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:38:24.918168   58299 start.go:159] libmachine.API.Create for "ha-107957" (driver="docker")
	I0916 10:38:24.918198   58299 client.go:168] LocalClient.Create starting
	I0916 10:38:24.918280   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:38:24.918316   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:24.918336   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:24.918402   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:38:24.918429   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:24.918446   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:24.918718   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:38:24.937284   58299 network_create.go:77] Found existing network {name:ha-107957 subnet:0xc001c42ff0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 10:38:24.937324   58299 kic.go:121] calculated static IP "192.168.49.4" for the "ha-107957-m03" container
	I0916 10:38:24.937431   58299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:38:24.955937   58299 cli_runner.go:164] Run: docker volume create ha-107957-m03 --label name.minikube.sigs.k8s.io=ha-107957-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:38:24.974526   58299 oci.go:103] Successfully created a docker volume ha-107957-m03
	I0916 10:38:24.974625   58299 cli_runner.go:164] Run: docker run --rm --name ha-107957-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957-m03 --entrypoint /usr/bin/test -v ha-107957-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:38:25.480664   58299 oci.go:107] Successfully prepared a docker volume ha-107957-m03
	I0916 10:38:25.480707   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:25.480730   58299 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:38:25.480804   58299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:38:29.894616   58299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413762946s)
	I0916 10:38:29.894655   58299 kic.go:203] duration metric: took 4.413918091s to extract preloaded images to volume ...
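
Preload extraction reuses the kicbase image as a throwaway tar runner: the lz4 tarball is bind-mounted read-only and unpacked into the node's named volume before the node container itself exists. A sketch of the same pattern with os/exec, with the flags taken from the command logged above (paths and the image tag are from this run; substitute your own):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const (
    		vol     = "ha-107957-m03"
    		tarball = "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
    		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644"
    	)
    	// docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", vol+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
    	}
    }
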
	W0916 10:38:29.894789   58299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:38:29.894879   58299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:38:29.944523   58299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-107957-m03 --name ha-107957-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-107957-m03 --network ha-107957 --ip 192.168.49.4 --volume ha-107957-m03:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:38:30.226366   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Running}}
	I0916 10:38:30.246741   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:38:30.264758   58299 cli_runner.go:164] Run: docker exec ha-107957-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:38:30.307422   58299 oci.go:144] the created container "ha-107957-m03" has a running status.
	I0916 10:38:30.307452   58299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa...
	I0916 10:38:30.466012   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:38:30.466061   58299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:38:30.490382   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:38:30.509012   58299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:38:30.509040   58299 kic_runner.go:114] Args: [docker exec --privileged ha-107957-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:38:30.558367   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:38:30.577094   58299 machine.go:93] provisionDockerMachine start ...
	I0916 10:38:30.577189   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:30.602673   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:30.602963   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:38:30.602979   58299 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:38:30.603835   58299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37776->127.0.0.1:32793: read: connection reset by peer
	I0916 10:38:33.737131   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m03
	
	I0916 10:38:33.737162   58299 ubuntu.go:169] provisioning hostname "ha-107957-m03"
	I0916 10:38:33.737217   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:33.754194   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:33.754364   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:38:33.754377   58299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m03 && echo "ha-107957-m03" | sudo tee /etc/hostname
	I0916 10:38:33.900681   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m03
	
	I0916 10:38:33.900767   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:33.918561   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:33.918794   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:38:33.918823   58299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:38:34.049313   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
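
Every provisioning step after container start runs over SSH to the forwarded port (127.0.0.1:32793 here); the dial at 10:38:30.60 is reset because sshd inside the container is not up yet, and the client simply retries until it is. A sketch of one such remote command with golang.org/x/crypto/ssh, with host-key checking skipped since the endpoint is a local port-forward; the key path is this run's, the retry loop is left as a comment:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyPEM)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32793", cfg)
    	if err != nil {
    		panic(err) // a retry loop here would absorb the early connection resets
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	fmt.Printf("%s (err=%v)\n", out, err)
    }
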
	I0916 10:38:34.049368   58299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:38:34.049395   58299 ubuntu.go:177] setting up certificates
	I0916 10:38:34.049408   58299 provision.go:84] configureAuth start
	I0916 10:38:34.049488   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:38:34.065664   58299 provision.go:143] copyHostCerts
	I0916 10:38:34.065709   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:38:34.065741   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:38:34.065754   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:38:34.065828   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:38:34.065923   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:38:34.065950   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:38:34.065960   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:38:34.065997   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:38:34.066054   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:38:34.066078   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:38:34.066087   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:38:34.066122   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:38:34.066189   58299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m03 san=[127.0.0.1 192.168.49.4 ha-107957-m03 localhost minikube]
	I0916 10:38:34.276571   58299 provision.go:177] copyRemoteCerts
	I0916 10:38:34.276624   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:38:34.276656   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.293215   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:34.386186   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:38:34.386268   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:38:34.409325   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:38:34.409403   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:38:34.432158   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:38:34.432213   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:38:34.454766   58299 provision.go:87] duration metric: took 405.337346ms to configureAuth
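
The generated server cert carries every name a client might dial as a SAN: 127.0.0.1, 192.168.49.4, ha-107957-m03, localhost, and minikube, per the san=[...] list logged above. A compact sketch of signing such a cert with crypto/x509; a throwaway in-memory CA stands in for minikube's ca.pem/ca-key.pem, and error handling is trimmed:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (real runs load ca.pem/ca-key.pem from the profile).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SAN list from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-107957-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
    		DNSNames:     []string{"ha-107957-m03", "localhost", "minikube"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
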
	I0916 10:38:34.454791   58299 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:38:34.455029   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:34.455144   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.471918   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:34.472102   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:38:34.472121   58299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:38:34.694736   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:38:34.694764   58299 machine.go:96] duration metric: took 4.117643787s to provisionDockerMachine
	I0916 10:38:34.694775   58299 client.go:171] duration metric: took 9.776568912s to LocalClient.Create
	I0916 10:38:34.694792   58299 start.go:167] duration metric: took 9.77662729s to libmachine.API.Create "ha-107957"
	I0916 10:38:34.694799   58299 start.go:293] postStartSetup for "ha-107957-m03" (driver="docker")
	I0916 10:38:34.694811   58299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:38:34.694880   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:38:34.694929   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.712379   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:34.806418   58299 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:38:34.809963   58299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:38:34.809996   58299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:38:34.810004   58299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:38:34.810011   58299 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:38:34.810020   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:38:34.810074   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:38:34.810142   58299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:38:34.810151   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:38:34.810231   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:38:34.818424   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:38:34.842357   58299 start.go:296] duration metric: took 147.542838ms for postStartSetup
	I0916 10:38:34.842746   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:38:34.859806   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:38:34.860057   58299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:38:34.860097   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.876488   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:34.970095   58299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:38:34.974100   58299 start.go:128] duration metric: took 10.058431856s to createHost
	I0916 10:38:34.974126   58299 start.go:83] releasing machines lock for "ha-107957-m03", held for 10.058651431s
	I0916 10:38:34.974186   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:38:34.993465   58299 out.go:177] * Found network options:
	I0916 10:38:34.994925   58299 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 10:38:34.996440   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:38:34.996464   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:38:34.996485   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:38:34.996496   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:38:34.996563   58299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:38:34.996595   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.996639   58299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:38:34.996708   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:35.015457   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:35.015686   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:35.245067   58299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:38:35.249634   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:38:35.267233   58299 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:38:35.267298   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:38:35.294721   58299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:38:35.294744   58299 start.go:495] detecting cgroup driver to use...
	I0916 10:38:35.294776   58299 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:38:35.294817   58299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:38:35.308988   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:38:35.320707   58299 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:38:35.320756   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:38:35.334091   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:38:35.347248   58299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:38:35.423897   58299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:38:35.508610   58299 docker.go:233] disabling docker service ...
	I0916 10:38:35.508681   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:38:35.527435   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:38:35.539623   58299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:38:35.615361   58299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:38:35.705579   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:38:35.716556   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:38:35.732322   58299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:38:35.732390   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.742382   58299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:38:35.742444   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.752000   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.761540   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.770919   58299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:38:35.779702   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.789271   58299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.804581   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
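The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager = "cgroupfs" to match the driver detected on the host, re-pin conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0 so pods may bind low ports. Here is a minimal Go sketch performing the same rewrites on the file's text; the regexes are adapted from the log's sed expressions and the function is illustrative, not minikube's code.

    package main

    import (
        "fmt"
        "regexp"
    )

    // applyCrioEdits mirrors the sed pipeline from the log on an in-memory
    // copy of /etc/crio/crio.conf.d/02-crio.conf.
    func applyCrioEdits(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Drop any stale conmon_cgroup line, then re-add it after cgroup_manager.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
        // Ensure a default_sysctls block exists, then open the unprivileged
        // port range for every pod sandbox.
        if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
            conf = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
                ReplaceAllString(conf, "${1}\ndefault_sysctls = [\n]")
        }
        conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
            ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
        return conf
    }

    func main() {
        in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(applyCrioEdits(in))
    }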
	I0916 10:38:35.814492   58299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:38:35.822427   58299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:38:35.830863   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:35.894987   58299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:38:35.994767   58299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:38:35.994837   58299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:38:35.998643   58299 start.go:563] Will wait 60s for crictl version
	I0916 10:38:35.998710   58299 ssh_runner.go:195] Run: which crictl
	I0916 10:38:36.002002   58299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:38:36.033661   58299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:38:36.033739   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:38:36.066846   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:38:36.103997   58299 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:38:36.105552   58299 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:38:36.107025   58299 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:38:36.108392   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:38:36.124868   58299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:38:36.128276   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
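The /etc/hosts rewrite above is the usual filter-append-replace pattern: strip any stale host.minikube.internal line, append the current gateway mapping, write to a temp file, and copy it back in one step so the file is never left half-written. A hedged Go equivalent (path and hostname from the log; the helper is illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry drops any line ending in "\t<host>" and appends a
    // fresh "<ip>\t<host>" mapping, mirroring the grep -v / echo / cp
    // pipeline in the log.
    func upsertHostsEntry(contents, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        b, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Print(upsertHostsEntry(string(b), "192.168.49.1", "host.minikube.internal"))
    }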
	I0916 10:38:36.138504   58299 mustload.go:65] Loading cluster: ha-107957
	I0916 10:38:36.138756   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:36.139027   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:38:36.156422   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:38:36.156692   58299 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.4
	I0916 10:38:36.156706   58299 certs.go:194] generating shared ca certs ...
	I0916 10:38:36.156718   58299 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:36.156856   58299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:38:36.156919   58299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:38:36.156933   58299 certs.go:256] generating profile certs ...
	I0916 10:38:36.157042   58299 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:38:36.157079   58299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518
	I0916 10:38:36.157099   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.d4dae518 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 10:38:36.471351   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.d4dae518 ...
	I0916 10:38:36.471379   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.d4dae518: {Name:mk86ec6e4db4e3ee25dab34a66ccccc54b2fa772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:36.471548   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518 ...
	I0916 10:38:36.471560   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518: {Name:mk7f635af130dc443af1fb5996a9a27aeb6677f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:36.471631   58299 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.d4dae518 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
	I0916 10:38:36.471764   58299 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key
	I0916 10:38:36.471890   58299 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
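The regenerated apiserver certificate above has to carry every address a client might dial as a SAN: the in-cluster service IP 10.96.0.1 (first address of the 10.96.0.0/12 ServiceCIDR), 127.0.0.1, all three control-plane node IPs, and the kube-vip VIP 192.168.49.254. A minimal crypto/x509 sketch of a template with those IP SANs follows; the SAN list is from the log, while the key size, validity, and self-signing are illustrative (minikube signs with its cluster CA).

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Every IP the apiserver can be reached on must appear as a SAN;
        // otherwise clients dialing the VIP or a node IP fail TLS verification.
        sans := []net.IP{
            net.ParseIP("10.96.0.1"), // kubernetes.default service IP
            net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.49.2"), // control-plane node IPs
            net.ParseIP("192.168.49.3"),
            net.ParseIP("192.168.49.4"),
            net.ParseIP("192.168.49.254"), // kube-vip HA VIP
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  sans,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }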
	I0916 10:38:36.471905   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:38:36.471918   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:38:36.471928   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:38:36.471985   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:38:36.472003   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:38:36.472013   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:38:36.472022   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:38:36.472031   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:38:36.472077   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:38:36.472106   58299 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:38:36.472120   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:38:36.472152   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:38:36.472181   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:38:36.472218   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:38:36.472272   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:38:36.472312   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:36.472334   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:38:36.472353   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:38:36.472413   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:38:36.489765   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:38:36.589733   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:38:36.593224   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:38:36.604469   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:38:36.607605   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:38:36.618797   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:38:36.621830   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:38:36.633365   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:38:36.636572   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:38:36.647677   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:38:36.650797   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:38:36.663187   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:38:36.666395   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:38:36.678179   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:38:36.703649   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:38:36.729425   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:38:36.757180   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:38:36.782176   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 10:38:36.804453   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:38:36.826154   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:38:36.849179   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:38:36.872612   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:38:36.895194   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:38:36.917519   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:38:36.940358   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:38:36.956876   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:38:36.973133   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:38:36.989169   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:38:37.005196   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:38:37.021420   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:38:37.036917   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:38:37.052659   58299 ssh_runner.go:195] Run: openssl version
	I0916 10:38:37.057616   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:38:37.065915   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:38:37.068939   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:38:37.068983   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:38:37.075024   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:38:37.083561   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:38:37.091935   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:37.095084   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:37.095131   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:37.101373   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:38:37.109566   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:38:37.118196   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:38:37.121300   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:38:37.121381   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:38:37.127557   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
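The `openssl x509 -hash -noout` / `ln -fs` pairs above build OpenSSL's trust-directory layout: each CA under /etc/ssl/certs must be reachable via a symlink named <subject-hash>.0 (e.g. b5213941.0 for minikubeCA in the log), which is how libraries that scan the directory locate an issuer. A hedged sketch that shells out the same way; the openssl invocation matches the log, the wrapper is illustrative.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert installs pemPath into OpenSSL's hashed-lookup scheme:
    // /etc/ssl/certs/<subject-hash>.0 -> pemPath.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // mirror ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("error:", err)
        }
    }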
	I0916 10:38:37.136439   58299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:38:37.139455   58299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:38:37.139509   58299 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.1 crio true true} ...
	I0916 10:38:37.139614   58299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
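In the generated unit above, the empty ExecStart= line is deliberate: a systemd drop-in must first clear the inherited ExecStart before a non-oneshot service may assign a new command, or systemd rejects the unit. The flags that follow pin the node identity, with --hostname-override=ha-107957-m03 and --node-ip=192.168.49.4 keeping the kubelet's registration consistent with the join below. A sketch rendering such a drop-in (values from the log; the templating itself is illustrative):

    package main

    import (
        "os"
        "text/template"
    )

    // A drop-in clears the inherited ExecStart with an empty assignment
    // before setting the new command; omitting the blank ExecStart= makes
    // systemd refuse the second ExecStart for a simple service.
    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
        _ = t.Execute(os.Stdout, map[string]string{
            "Version": "v1.31.1", "Node": "ha-107957-m03", "IP": "192.168.49.4",
        })
    }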
	I0916 10:38:37.139646   58299 kube-vip.go:115] generating kube-vip config ...
	I0916 10:38:37.139685   58299 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:38:37.151097   58299 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:38:37.151174   58299 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
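Because `lsmod | grep ip_vs` above found no IPVS modules, kube-vip falls back from IPVS-based control-plane load balancing to plain ARP failover: whichever manager wins the plndr-cp-lock lease (5s duration, 3s renew deadline, 1s retry, per the manifest) answers ARP for 192.168.49.254 on eth0, so exactly one control-plane node owns the VIP at a time; the NET_ADMIN and NET_RAW capabilities are what let it claim the address and send gratuitous ARP. A minimal sketch of the module probe that gates this decision, reading /proc/modules directly instead of shelling out to lsmod (the check mirrors the log; the parsing is illustrative):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipvsAvailable reports whether any ip_vs module is loaded.
    func ipvsAvailable() (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), "ip_vs") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := ipvsAvailable()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        if ok {
            fmt.Println("enabling IPVS control-plane load balancing")
        } else {
            fmt.Println("falling back to ARP-only VIP failover")
        }
    }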
	I0916 10:38:37.151227   58299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:38:37.159168   58299 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:38:37.159225   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:38:37.167003   58299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:38:37.182637   58299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:38:37.199093   58299 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:38:37.215506   58299 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:38:37.219086   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:37.229046   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:37.307091   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:38:37.319732   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:38:37.320004   58299 start.go:317] joinCluster: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:38:37.320166   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:38:37.320215   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:38:37.338002   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:38:37.478150   58299 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:37.478202   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mokpad.4jldtvkjjjsar6qe --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-107957-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 10:38:41.717081   58299 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mokpad.4jldtvkjjjsar6qe --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-107957-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (4.238853787s)
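The --discovery-token-ca-cert-hash sha256:f35b67… pin in the join command above lets the new node authenticate the cluster CA it fetches over the bootstrap-token channel: kubeadm defines the hash as SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA certificate. A short sketch computing it from ca.crt (the file path is the one shown earlier in the log; the hash definition is kubeadm's documented format):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash reproduces kubeadm's discovery-token-ca-cert-hash: sha256
    // over the DER encoding of the CA cert's SubjectPublicKeyInfo.
    func caCertHash(caPEM []byte) (string, error) {
        block, _ := pem.Decode(caPEM)
        if block == nil {
            return "", fmt.Errorf("no PEM block in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", sha256.Sum256(cert.RawSubjectPublicKeyInfo)), nil
    }

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        h, err := caCertHash(pemBytes)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(h)
    }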
	I0916 10:38:41.717123   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:38:42.603720   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-107957-m03 minikube.k8s.io/updated_at=2024_09_16T10_38_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-107957 minikube.k8s.io/primary=false
	I0916 10:38:42.697412   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-107957-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:38:42.789833   58299 start.go:319] duration metric: took 5.469823264s to joinCluster
	I0916 10:38:42.789915   58299 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:42.790199   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:42.791936   58299 out.go:177] * Verifying Kubernetes components...
	I0916 10:38:42.793445   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:43.204937   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:38:43.222289   58299 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:38:43.222631   58299 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:38:43.222711   58299 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:38:43.222966   58299 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m03" to be "Ready" ...
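From here the log polls GET /api/v1/nodes/ha-107957-m03 every 500ms for up to 6 minutes, reporting "Ready":"False" roughly every two seconds until the CNI comes up and the kubelet posts a Ready condition; note the override just above, which swaps the stale VIP host in the kubeconfig for the primary's address before polling. A hedged client-go sketch of the same wait loop (names and cadence from the log; this assumes a recent apimachinery with wait.PollUntilContextTimeout, and a clientset configured elsewhere):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its NodeReady condition
    // turns True, matching the 500ms/6m cadence seen in the log.
    func waitNodeReady(cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient apiserver errors: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        fmt.Println("waitNodeReady would be called with a configured clientset")
        _ = waitNodeReady // referenced so the sketch compiles standalone
    }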
	I0916 10:38:43.223047   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:43.223056   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:43.223067   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:43.223075   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:43.226173   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:43.724068   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:43.724092   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:43.724100   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:43.724104   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:43.726808   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:44.223643   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:44.223663   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:44.223671   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:44.223675   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:44.226814   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:44.723990   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:44.724010   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:44.724019   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:44.724024   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:44.726833   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:45.223744   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:45.223765   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:45.223775   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:45.223780   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:45.226384   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:45.226829   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:45.723144   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:45.723164   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:45.723172   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:45.723177   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:45.725902   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:46.223799   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:46.223817   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:46.223826   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:46.223830   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:46.226521   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:46.723387   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:46.723412   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:46.723424   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:46.723429   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:46.726256   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:47.223145   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:47.223163   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:47.223173   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:47.223180   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:47.225957   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:47.723714   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:47.723744   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:47.723801   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:47.723810   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:47.726372   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:47.726834   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:48.223868   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:48.223890   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:48.223899   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:48.223905   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:48.226363   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:48.723209   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:48.723232   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:48.723240   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:48.723244   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:48.726007   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:49.223841   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:49.223860   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:49.223867   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:49.223873   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:49.226386   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:49.723552   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:49.723576   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:49.723584   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:49.723588   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:49.726465   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:49.728853   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:50.223252   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:50.223279   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:50.223287   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:50.223291   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:50.226029   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:50.723919   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:50.723941   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:50.723951   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:50.723958   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:50.726487   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:51.223373   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:51.223392   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:51.223400   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:51.223404   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:51.226038   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:51.723491   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:51.723516   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:51.723526   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:51.723530   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:51.726404   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:52.223302   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:52.223322   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:52.223330   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:52.223333   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:52.225843   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:52.226264   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:52.723624   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:52.723644   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:52.723652   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:52.723657   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:52.726430   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:53.223234   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:53.223253   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:53.223260   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:53.223265   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:53.225920   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:53.723831   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:53.723851   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:53.723860   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:53.723863   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:53.726703   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:54.224088   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:54.224107   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:54.224115   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:54.224118   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:54.226780   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:54.227361   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:54.723999   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:54.724019   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:54.724027   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:54.724036   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:54.728305   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:55.223232   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:55.223257   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:55.223265   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:55.223269   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:55.225822   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:55.724026   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:55.724049   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:55.724057   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:55.724062   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:55.727072   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:56.223877   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:56.223898   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:56.223912   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:56.223916   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:56.226509   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:56.723365   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:56.723387   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:56.723395   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:56.723399   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:56.726446   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:56.727019   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:57.223210   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:57.223228   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:57.223237   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:57.223242   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:57.225607   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:57.723489   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:57.723509   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:57.723518   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:57.723522   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:57.726165   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:58.224115   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:58.224142   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:58.224153   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.224157   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.226597   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:58.723496   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:58.723515   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:58.723523   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.723530   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.726205   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:59.224032   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:59.224052   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:59.224060   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:59.224064   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:59.226708   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:59.227328   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:59.723865   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:59.723887   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:59.723895   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:59.723898   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:59.726433   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:00.223224   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:00.223245   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:00.223255   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:00.223260   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:00.225857   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:00.723626   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:00.723652   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:00.723661   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:00.723666   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:00.726165   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:01.223617   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:01.223643   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:01.223654   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:01.223661   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:01.226347   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:01.724239   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:01.724263   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:01.724273   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:01.724280   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:01.727378   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:01.727905   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:02.223152   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:02.223173   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:02.223181   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:02.223184   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:02.225887   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:02.723842   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:02.723872   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:02.723881   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:02.723886   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:02.726580   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:03.223540   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:03.223560   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:03.223568   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:03.223573   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:03.226137   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:03.724092   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:03.724115   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:03.724123   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:03.724130   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:03.726966   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:04.224077   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:04.224112   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:04.224130   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:04.224135   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:04.226790   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:04.227365   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:04.723906   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:04.723926   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:04.723934   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:04.723939   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:04.726497   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:05.223270   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:05.223294   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:05.223304   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:05.223311   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:05.225812   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:05.723595   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:05.723618   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:05.723626   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:05.723630   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:05.726445   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:06.223346   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:06.223372   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:06.223380   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:06.223384   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:06.225895   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:06.723788   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:06.723807   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:06.723815   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:06.723820   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:06.726545   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:06.727056   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:07.223292   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:07.223311   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:07.223319   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:07.223323   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:07.225982   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:07.723896   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:07.723924   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:07.723936   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:07.723943   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:07.726547   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:08.224121   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:08.224143   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:08.224150   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:08.224153   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:08.226570   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:08.723223   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:08.723243   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:08.723252   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:08.723255   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:08.726077   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:09.223947   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:09.223972   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:09.223980   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:09.223987   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:09.226816   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:09.227340   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:09.723981   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:09.724002   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:09.724010   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:09.724013   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:09.726901   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:10.223381   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:10.223401   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:10.223409   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:10.223413   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:10.226251   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:10.723998   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:10.724022   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:10.724031   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:10.724039   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:10.726605   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:11.223184   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:11.223203   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:11.223211   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:11.223221   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:11.225824   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:11.723612   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:11.723632   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:11.723640   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:11.723648   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:11.726455   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:11.726948   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:12.223255   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:12.223278   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:12.223287   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:12.223291   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:12.226201   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:12.724060   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:12.724079   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:12.724087   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:12.724090   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:12.726725   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:13.223507   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:13.223531   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:13.223542   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:13.223548   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:13.226403   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:13.723243   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:13.723264   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:13.723271   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:13.723275   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:13.726009   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:14.223889   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:14.223918   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:14.223928   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:14.223932   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:14.226364   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:14.226853   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:14.723299   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:14.723322   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:14.723330   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:14.723334   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:14.725924   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:15.223800   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:15.223821   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:15.223829   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:15.223834   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:15.226539   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:15.723428   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:15.723447   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:15.723455   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:15.723460   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:15.726258   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:16.224161   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:16.224182   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:16.224192   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:16.224197   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:16.227001   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:16.227548   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:16.723969   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:16.723991   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:16.723999   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:16.724004   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:16.726839   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:17.223345   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:17.223365   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:17.223373   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:17.223377   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:17.226010   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:17.723698   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:17.723718   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:17.723726   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:17.723733   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:17.726410   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:18.223220   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:18.223239   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:18.223246   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:18.223249   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:18.225859   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:18.723764   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:18.723789   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:18.723797   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:18.723802   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:18.726495   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:18.727013   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:19.223301   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:19.223322   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:19.223329   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:19.223333   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:19.226117   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:19.724078   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:19.724099   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:19.724107   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:19.724115   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:19.726978   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:20.223847   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:20.223875   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:20.223883   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:20.223887   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:20.226871   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:20.723656   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:20.723676   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:20.723684   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:20.723688   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:20.726302   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:21.224203   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:21.224224   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:21.224232   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:21.224240   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:21.226516   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:21.227012   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:21.723971   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:21.723991   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:21.723998   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:21.724002   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:21.726538   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:22.223405   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:22.223425   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:22.223433   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:22.223437   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:22.226024   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:22.723921   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:22.723941   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:22.723949   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:22.723953   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:22.726717   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.223546   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:23.223569   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.223581   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.223587   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.226207   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.226800   58299 node_ready.go:49] node "ha-107957-m03" has status "Ready":"True"
	I0916 10:39:23.226836   58299 node_ready.go:38] duration metric: took 40.003852399s for node "ha-107957-m03" to be "Ready" ...
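
The forty seconds of repeated GETs above are a readiness poll: minikube re-fetches /api/v1/nodes/ha-107957-m03 roughly every 500ms until the node's Ready condition flips to True, logging a "Ready":"False" status every few attempts. A minimal client-go sketch of that kind of poll, assuming a kubeconfig at the default path; the helper name pollNodeReady is hypothetical, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // pollNodeReady blocks until the named node reports Ready=True,
    // re-fetching the node object every 500ms like the loop logged above.
    func pollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil // node is Ready
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // timeout or cancellation
            case <-tick.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := pollNodeReady(ctx, cs, "ha-107957-m03"); err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }
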
	I0916 10:39:23.226850   58299 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0916 10:39:23.226951   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:23.226964   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.226974   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.226979   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.232388   58299 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:39:23.240725   58299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.240829   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:39:23.240842   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.240852   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.240862   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.243164   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.243724   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:23.243741   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.243749   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.243754   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.246015   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.246557   58299 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.246578   58299 pod_ready.go:82] duration metric: took 5.825605ms for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.246588   58299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.246665   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:39:23.246675   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.246684   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.246692   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.248794   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.249458   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:23.249473   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.249480   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.249483   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.251434   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:39:23.251951   58299 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.251972   58299 pod_ready.go:82] duration metric: took 5.374978ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.251984   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.252052   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:39:23.252063   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.252073   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.252080   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.254302   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.254805   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:23.254820   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.254828   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.254833   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.256635   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:39:23.257058   58299 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.257076   58299 pod_ready.go:82] duration metric: took 5.085871ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.257085   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.257136   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:39:23.257144   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.257150   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.257155   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.258999   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:39:23.259516   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:23.259533   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.259540   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.259544   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.261302   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:39:23.261746   58299 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.261761   58299 pod_ready.go:82] duration metric: took 4.671567ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.261771   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.424168   58299 request.go:632] Waited for 162.31858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:39:23.424228   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:39:23.424234   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.424241   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.424246   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.426762   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.623726   58299 request.go:632] Waited for 196.272148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:23.623803   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:23.623813   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.623820   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.623824   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.626384   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.626824   58299 pod_ready.go:93] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.626842   58299 pod_ready.go:82] duration metric: took 365.065423ms for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
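
The "Waited ... due to client-side throttling, not priority and fairness" lines that start appearing above come from client-go's client-side rate limiter: the default rest.Config allows 5 requests per second with a burst of 10, so once the burst is spent each request queues for a token, producing exactly these ~160-200ms stalls. A sketch of raising those limits; the values are illustrative only:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5 requests/second after the burst is exhausted
        cfg.Burst = 100 // default is 10
        _ = kubernetes.NewForConfigOrDie(cfg) // this clientset throttles far less aggressively
    }
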
	I0916 10:39:23.626862   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.824106   58299 request.go:632] Waited for 197.175106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:39:23.824199   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:39:23.824219   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.824233   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.824242   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.827180   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.024129   58299 request.go:632] Waited for 196.356662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:24.024198   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:24.024203   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.024211   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.024216   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.026781   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.027345   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:24.027366   58299 pod_ready.go:82] duration metric: took 400.494229ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.027379   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.224363   58299 request.go:632] Waited for 196.890278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:39:24.224424   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:39:24.224430   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.224438   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.224443   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.227132   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.424123   58299 request.go:632] Waited for 196.366355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:24.424220   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:24.424230   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.424241   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.424247   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.426764   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.427305   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:24.427327   58299 pod_ready.go:82] duration metric: took 399.940426ms for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.427340   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.624598   58299 request.go:632] Waited for 197.17129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:39:24.624660   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:39:24.624665   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.624673   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.624679   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.627797   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:24.823610   58299 request.go:632] Waited for 195.133821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:24.823673   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:24.823682   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.823692   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.823698   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.826160   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.826579   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:24.826597   58299 pod_ready.go:82] duration metric: took 399.250784ms for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.826608   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.023552   58299 request.go:632] Waited for 196.87134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:39:25.023607   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:39:25.023612   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.023620   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.023623   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.026543   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:25.223563   58299 request.go:632] Waited for 196.285225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:25.223627   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:25.223632   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.223640   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.223646   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.226158   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:25.226630   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:25.226650   58299 pod_ready.go:82] duration metric: took 400.034095ms for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.226663   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.423665   58299 request.go:632] Waited for 196.9218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:39:25.423752   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:39:25.423764   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.423776   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.423782   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.426729   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:25.623695   58299 request.go:632] Waited for 196.27248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:25.623760   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:25.623770   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.623781   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.623791   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.626267   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:25.626854   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:25.626875   58299 pod_ready.go:82] duration metric: took 400.203437ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.626892   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.823948   58299 request.go:632] Waited for 196.960808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:39:25.824005   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:39:25.824012   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.824024   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.824034   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.826859   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.023769   58299 request.go:632] Waited for 196.268704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:26.023845   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:26.023852   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.023863   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.023871   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.026444   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.026923   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:26.026942   58299 pod_ready.go:82] duration metric: took 400.04067ms for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.026953   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.224044   58299 request.go:632] Waited for 197.015321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:39:26.224111   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:39:26.224123   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.224134   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.224140   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.226759   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.423736   58299 request.go:632] Waited for 196.372075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:26.423822   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:26.423834   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.423843   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.423850   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.426445   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.426998   58299 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:26.427021   58299 pod_ready.go:82] duration metric: took 400.06143ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.427032   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.623931   58299 request.go:632] Waited for 196.824798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:39:26.623990   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:39:26.623997   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.624007   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.624015   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.626765   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.824526   58299 request.go:632] Waited for 197.199798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:26.824601   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:26.824612   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.824622   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.824628   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.827165   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.827603   58299 pod_ready.go:93] pod "kube-proxy-f2scr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:26.827622   58299 pod_ready.go:82] duration metric: took 400.581271ms for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.827631   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.023783   58299 request.go:632] Waited for 196.042254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:39:27.023838   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:39:27.023844   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.023851   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.023855   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.026409   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:27.224334   58299 request.go:632] Waited for 197.34357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:27.224394   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:27.224399   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.224406   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.224410   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.226960   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:27.227494   58299 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:27.227513   58299 pod_ready.go:82] duration metric: took 399.869121ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.227523   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.423574   58299 request.go:632] Waited for 195.971296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:39:27.423665   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:39:27.423696   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.423705   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.423711   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.426388   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:27.624349   58299 request.go:632] Waited for 197.361421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:27.624417   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:27.624425   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.624433   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.624436   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.627058   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:27.627584   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:27.627606   58299 pod_ready.go:82] duration metric: took 400.075507ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.627620   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.823640   58299 request.go:632] Waited for 195.928112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:39:27.823734   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:39:27.823741   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.823751   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.823760   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.826521   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:28.024401   58299 request.go:632] Waited for 197.354153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:28.024474   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:28.024479   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.024487   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.024490   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.027115   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:28.027598   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:28.027622   58299 pod_ready.go:82] duration metric: took 399.991808ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:28.027634   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:28.224610   58299 request.go:632] Waited for 196.899296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:39:28.224698   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:39:28.224706   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.224717   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.224727   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.227365   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:28.424322   58299 request.go:632] Waited for 196.362994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:28.424391   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:28.424399   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.424409   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.424416   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.427305   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:28.427942   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:28.427968   58299 pod_ready.go:82] duration metric: took 400.324894ms for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:28.427984   58299 pod_ready.go:39] duration metric: took 5.201116236s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:39:28.428018   58299 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:39:28.428111   58299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:39:28.440259   58299 api_server.go:72] duration metric: took 45.650305903s to wait for apiserver process to appear ...
	I0916 10:39:28.440286   58299 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:39:28.440319   58299 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:39:28.445420   58299 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:39:28.445496   58299 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:39:28.445503   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.445511   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.445517   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.446266   58299 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:39:28.446319   58299 api_server.go:141] control plane version: v1.31.1
	I0916 10:39:28.446336   58299 api_server.go:131] duration metric: took 6.043324ms to wait for apiserver health ...
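
The two probes above map onto one-liners in client-go: a raw GET against /healthz, which a healthy apiserver answers with the literal body "ok", and a /version call that yields the control-plane version string. A sketch under the same default-kubeconfig assumption:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GET /healthz -- a healthy apiserver returns 200 with body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version -- the same call the log issues to learn "v1.31.1".
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
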
	I0916 10:39:28.446345   58299 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:39:28.623701   58299 request.go:632] Waited for 177.249352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:28.623756   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:28.623761   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.623769   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.623774   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.628677   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:28.635104   58299 system_pods.go:59] 24 kube-system pods found
	I0916 10:39:28.635147   58299 system_pods.go:61] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:39:28.635153   58299 system_pods.go:61] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:39:28.635156   58299 system_pods.go:61] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:39:28.635160   58299 system_pods.go:61] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:39:28.635168   58299 system_pods.go:61] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:39:28.635172   58299 system_pods.go:61] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:39:28.635175   58299 system_pods.go:61] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:39:28.635179   58299 system_pods.go:61] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:39:28.635183   58299 system_pods.go:61] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:39:28.635187   58299 system_pods.go:61] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:39:28.635192   58299 system_pods.go:61] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:39:28.635197   58299 system_pods.go:61] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:39:28.635203   58299 system_pods.go:61] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:39:28.635206   58299 system_pods.go:61] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:39:28.635209   58299 system_pods.go:61] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:39:28.635212   58299 system_pods.go:61] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:39:28.635215   58299 system_pods.go:61] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:39:28.635221   58299 system_pods.go:61] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:39:28.635226   58299 system_pods.go:61] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:39:28.635229   58299 system_pods.go:61] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:39:28.635234   58299 system_pods.go:61] "kube-vip-ha-107957" [f6ff7681-062a-4c0b-a621-4b5c3079ee99] Running
	I0916 10:39:28.635237   58299 system_pods.go:61] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:39:28.635242   58299 system_pods.go:61] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:39:28.635246   58299 system_pods.go:61] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:39:28.635252   58299 system_pods.go:74] duration metric: took 188.899196ms to wait for pod list to return data ...
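
The "24 kube-system pods found" block above is a plain namespaced list call followed by a per-pod phase check. A sketch that reproduces the listing; the output format is illustrative, not minikube's:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] running=%v\n", p.Name, p.UID, p.Status.Phase == corev1.PodRunning)
        }
    }
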
	I0916 10:39:28.635261   58299 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:39:28.823594   58299 request.go:632] Waited for 188.251858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:39:28.823646   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:39:28.823651   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.823658   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.823662   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.826719   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:28.826847   58299 default_sa.go:45] found service account: "default"
	I0916 10:39:28.826861   58299 default_sa.go:55] duration metric: took 191.593552ms for default service account to be created ...
	I0916 10:39:28.826868   58299 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:39:29.024332   58299 request.go:632] Waited for 197.387174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:29.024398   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:29.024406   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:29.024416   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:29.024430   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:29.029545   58299 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:39:29.035853   58299 system_pods.go:86] 24 kube-system pods found
	I0916 10:39:29.035881   58299 system_pods.go:89] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:39:29.035888   58299 system_pods.go:89] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:39:29.035892   58299 system_pods.go:89] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:39:29.035896   58299 system_pods.go:89] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:39:29.035901   58299 system_pods.go:89] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:39:29.035905   58299 system_pods.go:89] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:39:29.035910   58299 system_pods.go:89] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:39:29.035914   58299 system_pods.go:89] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:39:29.035918   58299 system_pods.go:89] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:39:29.035922   58299 system_pods.go:89] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:39:29.035925   58299 system_pods.go:89] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:39:29.035929   58299 system_pods.go:89] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:39:29.035933   58299 system_pods.go:89] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:39:29.035937   58299 system_pods.go:89] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:39:29.035941   58299 system_pods.go:89] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:39:29.035944   58299 system_pods.go:89] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:39:29.035948   58299 system_pods.go:89] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:39:29.035951   58299 system_pods.go:89] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:39:29.035954   58299 system_pods.go:89] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:39:29.035958   58299 system_pods.go:89] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:39:29.035961   58299 system_pods.go:89] "kube-vip-ha-107957" [f6ff7681-062a-4c0b-a621-4b5c3079ee99] Running
	I0916 10:39:29.035966   58299 system_pods.go:89] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:39:29.035969   58299 system_pods.go:89] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:39:29.035972   58299 system_pods.go:89] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:39:29.035979   58299 system_pods.go:126] duration metric: took 209.105667ms to wait for k8s-apps to be running ...
	I0916 10:39:29.035996   58299 system_svc.go:44] waiting for kubelet service to be running ...
	I0916 10:39:29.036044   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:39:29.046827   58299 system_svc.go:56] duration metric: took 10.82024ms (WaitForService) to wait for kubelet
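
The kubelet probe above shells out to systemctl over SSH inside the node; with --quiet, the exit code alone carries the answer. A local sketch of the same check (run on the node itself rather than over SSH, which is an assumption of this example):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // systemctl exits 0 when the unit is active; --quiet suppresses output,
        // so only the exit status matters.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
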
	I0916 10:39:29.046857   58299 kubeadm.go:582] duration metric: took 46.256910268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:39:29.046891   58299 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:39:29.224236   58299 request.go:632] Waited for 177.251294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:39:29.224304   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:39:29.224314   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:29.224323   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:29.224332   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:29.227796   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:29.228723   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:39:29.228764   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:39:29.228795   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:39:29.228801   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:39:29.228807   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:39:29.228813   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:39:29.228822   58299 node_conditions.go:105] duration metric: took 181.924487ms to run NodePressure ...
	I0916 10:39:29.228842   58299 start.go:241] waiting for startup goroutines ...
	I0916 10:39:29.228872   58299 start.go:255] writing updated cluster config ...
	I0916 10:39:29.229288   58299 ssh_runner.go:195] Run: rm -f paused
	I0916 10:39:29.236462   58299 out.go:177] * Done! kubectl is now configured to use "ha-107957" cluster and "default" namespace by default
	E0916 10:39:29.237717   58299 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
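
Note on the final error line: fork/exec of /usr/local/bin/kubectl failing with "exec format error" almost always means the file at that path is not a valid executable for the node's architecture (a wrong-arch download, a truncated file, or a script with no shebang). A quick way to confirm, using the path reported in the log:

	file /usr/local/bin/kubectl
	head -c 4 /usr/local/bin/kubectl | xxd   # a valid Linux binary starts with the ELF magic 7f 45 4c 46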
	
	
	==> CRI-O <==
	Sep 16 10:37:36 ha-107957 crio[1034]: time="2024-09-16 10:37:36.244431248Z" level=info msg="Created container 2812c05cbb819fba02026f853f56bf72103333b063d4ca9d8556a1a9ba9ea62a: kube-system/coredns-7c65d6cfc9-mhp28/coredns" id=1bdfb496-6253-422a-af30-dc700b4b48bd name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:37:36 ha-107957 crio[1034]: time="2024-09-16 10:37:36.244970827Z" level=info msg="Starting container: 2812c05cbb819fba02026f853f56bf72103333b063d4ca9d8556a1a9ba9ea62a" id=b7193082-2d26-4f45-9b05-5daccd29ccd3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:37:36 ha-107957 crio[1034]: time="2024-09-16 10:37:36.299215725Z" level=info msg="Started container" PID=2390 containerID=2812c05cbb819fba02026f853f56bf72103333b063d4ca9d8556a1a9ba9ea62a description=kube-system/coredns-7c65d6cfc9-mhp28/coredns id=b7193082-2d26-4f45-9b05-5daccd29ccd3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7174b9a3e70964062e8b18263b30732ccbb5b458d5b4b2a807bbda9cdd79b329
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.433727831Z" level=info msg="Running pod sandbox: default/busybox-7dff88458-m2jh6/POD" id=5e156903-b707-458c-ad93-55d4d43a105f name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.433820948Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.449846263Z" level=info msg="Got pod network &{Name:busybox-7dff88458-m2jh6 Namespace:default ID:710de54c88a1ba1855da0ef0724e031f59bef7ed77aea4ca7f5b6eb012824843 UID:a43b7850-fcaa-4ca6-a5d0-c04bf031e2e8 NetNS:/var/run/netns/d062af54-eab5-468c-8a07-0a5ecd9b1c93 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.449882965Z" level=info msg="Adding pod default_busybox-7dff88458-m2jh6 to CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.462648174Z" level=info msg="Got pod network &{Name:busybox-7dff88458-m2jh6 Namespace:default ID:710de54c88a1ba1855da0ef0724e031f59bef7ed77aea4ca7f5b6eb012824843 UID:a43b7850-fcaa-4ca6-a5d0-c04bf031e2e8 NetNS:/var/run/netns/d062af54-eab5-468c-8a07-0a5ecd9b1c93 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.462769789Z" level=info msg="Checking pod default_busybox-7dff88458-m2jh6 for CNI network kindnet (type=ptp)"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.465905378Z" level=info msg="Ran pod sandbox 710de54c88a1ba1855da0ef0724e031f59bef7ed77aea4ca7f5b6eb012824843 with infra container: default/busybox-7dff88458-m2jh6/POD" id=5e156903-b707-458c-ad93-55d4d43a105f name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.467141808Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=937ca887-d235-42f7-a574-a480e24f85f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.467375671Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=937ca887-d235-42f7-a574-a480e24f85f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.468063188Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=aac48b61-89d2-456c-a77c-6ad2faaf9158 name=/runtime.v1.ImageService/PullImage
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.469110073Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:39:31 ha-107957 crio[1034]: time="2024-09-16 10:39:31.299742855Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.599930304Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=aac48b61-89d2-456c-a77c-6ad2faaf9158 name=/runtime.v1.ImageService/PullImage
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.600660737Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=9d39e8d1-6625-49c6-8a4c-12b60b1f5501 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.601278863Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9d39e8d1-6625-49c6-8a4c-12b60b1f5501 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.602624202Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c60851b0-ce9f-4127-bbb0-fdaca731deaa name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.603317032Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c60851b0-ce9f-4127-bbb0-fdaca731deaa name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.604260689Z" level=info msg="Creating container: default/busybox-7dff88458-m2jh6/busybox" id=3284e0d5-cc54-445e-9942-8534f7174e52 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.604377206Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.667673390Z" level=info msg="Created container 861381147b229f211fe3711140a60ff3444297d9705cd5049aa5576eef625468: default/busybox-7dff88458-m2jh6/busybox" id=3284e0d5-cc54-445e-9942-8534f7174e52 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.668454910Z" level=info msg="Starting container: 861381147b229f211fe3711140a60ff3444297d9705cd5049aa5576eef625468" id=8a6d1b44-8038-4396-ba62-ff83c05cdf8e name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.674514851Z" level=info msg="Started container" PID=2641 containerID=861381147b229f211fe3711140a60ff3444297d9705cd5049aa5576eef625468 description=default/busybox-7dff88458-m2jh6/busybox id=8a6d1b44-8038-4396-ba62-ff83c05cdf8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=710de54c88a1ba1855da0ef0724e031f59bef7ed77aea4ca7f5b6eb012824843
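
The CRI-O entries above trace the full lifecycle of the busybox test pod: sandbox creation, CNI attach to the "kindnet" network, image pull by digest, container creation, and start. Assuming CRI-O runs under its default systemd unit on the node (profile name taken from the log), the same entries can be re-collected with:

	minikube ssh -p ha-107957 -- sudo journalctl -u crio --since "2024-09-16 10:37" --no-pager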
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	861381147b229       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   36 seconds ago      Running             busybox                   0                   710de54c88a1b       busybox-7dff88458-m2jh6
	2812c05cbb819       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago       Running             coredns                   0                   7174b9a3e7096       coredns-7c65d6cfc9-mhp28
	6d2579e1933da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       0                   4f87c81927aed       storage-provisioner
	e70b0d4efee19       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago       Running             coredns                   0                   4993c49192681       coredns-7c65d6cfc9-t9xdr
	961b9339405b0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago       Running             kube-proxy                0                   e9b91b2749be8       kube-proxy-5ctr8
	70b5c5b4e1dc3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago       Running             kindnet-cni               0                   b4bf04ff45396       kindnet-rwcs2
	77ff8efc10fe1       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     2 minutes ago       Running             kube-vip                  0                   25ff40ebef580       kube-vip-ha-107957
	5962366f88b6f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago       Running             kube-scheduler            0                   dcd27af89531d       kube-scheduler-ha-107957
	b1d6cc64c9b2c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago       Running             kube-apiserver            0                   1adf66d5a6d51       kube-apiserver-ha-107957
	7e57abaf77dbc       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago       Running             kube-controller-manager   0                   774dc2301fff2       kube-controller-manager-ha-107957
	2481bf9216b4b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago       Running             etcd                      0                   194127e61d89d       etcd-ha-107957
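
This table matches the output format of crictl (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD). Assuming it was gathered the usual way on the node, it reproduces with:

	minikube ssh -p ha-107957 -- sudo crictl ps -a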
	
	
	==> coredns [2812c05cbb819fba02026f853f56bf72103333b063d4ca9d8556a1a9ba9ea62a] <==
	[INFO] 10.244.2.2:46793 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004296567s
	[INFO] 10.244.2.2:43063 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153947s
	[INFO] 10.244.2.2:46086 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0169607s
	[INFO] 10.244.2.2:54094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144006s
	[INFO] 10.244.0.4:44197 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122548s
	[INFO] 10.244.0.4:51311 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002080567s
	[INFO] 10.244.0.4:43617 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078863s
	[INFO] 10.244.1.2:53583 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153821s
	[INFO] 10.244.1.2:42615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001661333s
	[INFO] 10.244.1.2:39797 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086687s
	[INFO] 10.244.1.2:54605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151286s
	[INFO] 10.244.2.2:43370 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00023735s
	[INFO] 10.244.2.2:41422 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100456s
	[INFO] 10.244.2.2:39218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108926s
	[INFO] 10.244.0.4:60314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082915s
	[INFO] 10.244.1.2:41042 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137109s
	[INFO] 10.244.1.2:48817 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116903s
	[INFO] 10.244.1.2:45958 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088746s
	[INFO] 10.244.2.2:54916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157262s
	[INFO] 10.244.2.2:42021 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148398s
	[INFO] 10.244.2.2:48014 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123643s
	[INFO] 10.244.2.2:38833 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108016s
	[INFO] 10.244.0.4:41677 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128554s
	[INFO] 10.244.0.4:54618 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075484s
	[INFO] 10.244.1.2:42614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000081244s
	
	
	==> coredns [e70b0d4efee19ff2bd834f86c91dd591952f5e8561c4f155b13c60ed04c3210a] <==
	[INFO] 10.244.2.2:37520 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010271283s
	[INFO] 10.244.0.4:34086 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118588s
	[INFO] 10.244.0.4:50400 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000087455s
	[INFO] 10.244.2.2:33492 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158642s
	[INFO] 10.244.2.2:40447 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163329s
	[INFO] 10.244.2.2:55339 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136729s
	[INFO] 10.244.0.4:34312 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164115s
	[INFO] 10.244.0.4:44393 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106766s
	[INFO] 10.244.0.4:54524 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001587719s
	[INFO] 10.244.0.4:37539 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078422s
	[INFO] 10.244.0.4:55884 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090481s
	[INFO] 10.244.1.2:56325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001976108s
	[INFO] 10.244.1.2:35999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097425s
	[INFO] 10.244.1.2:58242 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085904s
	[INFO] 10.244.1.2:39966 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080925s
	[INFO] 10.244.2.2:44398 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014975s
	[INFO] 10.244.0.4:57559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149084s
	[INFO] 10.244.0.4:37522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057221s
	[INFO] 10.244.0.4:32815 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145244s
	[INFO] 10.244.1.2:43015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152009s
	[INFO] 10.244.0.4:33260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000146393s
	[INFO] 10.244.0.4:45907 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011862s
	[INFO] 10.244.1.2:41436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136888s
	[INFO] 10.244.1.2:56800 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131119s
	[INFO] 10.244.1.2:48525 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098212s
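
Each CoreDNS line above follows the log plugin's field order: client address, query ID, the question ("TYPE CLASS NAME proto size do bufsize"), response code, response flags, response size, and duration. The NXDOMAIN answers for bare "kubernetes.default." are expected ndots search-path probing, not failures. To pull these logs without shelling into the node (label selector assumes the standard kube-dns label on the CoreDNS pods):

	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50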
	
	
	==> describe nodes <==
	Name:               ha-107957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_37_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:37:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:40:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:39:52 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:39:52 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:39:52 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:39:52 +0000   Mon, 16 Sep 2024 10:37:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-107957
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 82180a11932f4b1fb524fbc706471f86
	  System UUID:                4b3cbb31-41b2-4aeb-852f-1a17b0b6a69f
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m2jh6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 coredns-7c65d6cfc9-mhp28             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m46s
	  kube-system                 coredns-7c65d6cfc9-t9xdr             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m46s
	  kube-system                 etcd-ha-107957                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m51s
	  kube-system                 kindnet-rwcs2                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m46s
	  kube-system                 kube-apiserver-ha-107957             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-controller-manager-ha-107957    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-proxy-5ctr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-scheduler-ha-107957             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-vip-ha-107957                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m45s  kube-proxy       
	  Normal   Starting                 2m51s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m51s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m51s  kubelet          Node ha-107957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m51s  kubelet          Node ha-107957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m51s  kubelet          Node ha-107957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m47s  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   NodeReady                2m35s  kubelet          Node ha-107957 status is now: NodeReady
	  Normal   RegisteredNode           2m25s  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           82s    node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	
	
	Name:               ha-107957-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_37_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:37:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:40:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:39:40 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:39:40 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:39:40 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:39:40 +0000   Mon, 16 Sep 2024 10:38:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-107957-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec7affb130534e45ba6df09cacc0853b
	  System UUID:                15471af5-ad40-4515-bf0c-79f0cc3f164e
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-plmdj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 etcd-ha-107957-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m32s
	  kube-system                 kindnet-sjkjx                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m33s
	  kube-system                 kube-apiserver-ha-107957-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-ha-107957-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-qtxh9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-scheduler-ha-107957-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-vip-ha-107957-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s (x7 over 2m33s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m32s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal  RegisteredNode           2m25s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal  RegisteredNode           82s                    node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	
	
	Name:               ha-107957-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:40:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:39:41 +0000   Mon, 16 Sep 2024 10:38:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:39:41 +0000   Mon, 16 Sep 2024 10:38:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:39:41 +0000   Mon, 16 Sep 2024 10:38:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:39:41 +0000   Mon, 16 Sep 2024 10:39:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-107957-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 e81e765d559f41f895dd17c226607233
	  System UUID:                66298d02-b2ec-4333-986a-47e548dee112
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4rfjs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 etcd-ha-107957-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         89s
	  kube-system                 kindnet-rcsxv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-ha-107957-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-ha-107957-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-f2scr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-ha-107957-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-vip-ha-107957-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 87s                kube-proxy       
	  Normal  RegisteredNode           90s                node-controller  Node ha-107957-m03 event: Registered Node ha-107957-m03 in Controller
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node ha-107957-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node ha-107957-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)  kubelet          Node ha-107957-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           87s                node-controller  Node ha-107957-m03 event: Registered Node ha-107957-m03 in Controller
	  Normal  RegisteredNode           82s                node-controller  Node ha-107957-m03 event: Registered Node ha-107957-m03 in Controller
	
	
	Name:               ha-107957-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_51_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:40:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:40:02 +0000   Mon, 16 Sep 2024 10:39:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:40:02 +0000   Mon, 16 Sep 2024 10:39:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:40:02 +0000   Mon, 16 Sep 2024 10:39:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:40:02 +0000   Mon, 16 Sep 2024 10:40:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-107957-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 71accbb4b2bc4cd5b4c754c38afdb6f6
	  System UUID:                85f6a07b-6b9f-43fc-98ae-305e46935522
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4lkzl       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-proxy-hm8zn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18s                kube-proxy       
	  Normal   RegisteredNode           20s                node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  20s (x2 over 20s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x2 over 20s)  kubelet          Node ha-107957-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x2 over 20s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17s                node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   RegisteredNode           17s                node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   NodeReady                8s                 kubelet          Node ha-107957-m04 status is now: NodeReady
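
The four Name: blocks above are kubectl describe output for the three control-plane nodes and the single worker (ha-107957-m04, Roles: <none>), all Ready. A compact view of the same state (context name taken from the "Done!" line earlier in the log):

	kubectl --context ha-107957 get nodes -o wide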
	
	
	==> dmesg <==
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 10:35] FS-Cache: Duplicate cookie detected
	[  +0.005031] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000007485c404{9P.session} n=000000002b39a795
	[  +0.007541] FS-Cache: O-key=[10] '34323935313533303732'
	[  +0.005370] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.006617] FS-Cache: N-cookie d=000000007485c404{9P.session} n=00000000364f9863
	[  +0.008939] FS-Cache: N-key=[10] '34323935313533303732'
	[ +14.884982] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
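
These dmesg entries (EISA probe noise, FS-Cache duplicate-cookie warnings, the kmem.limit_in_bytes deprecation) appear unrelated to the test failures. All four nodes report the same Boot ID (a010aa60-610e-44b7-a4b8-c05f29205fcf) in their System Info, so this is a single shared host kernel log, consistent with container-based nodes. To re-collect:

	minikube ssh -p ha-107957 -- sudo dmesg | tail -n 30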
	
	
	==> etcd [2481bf9216b4b36d1f0f3dd6f17b92cfbfc43b6eebff3f320009c9f040ead512] <==
	{"level":"info","ts":"2024-09-16T10:38:40.425420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 12748002774085638657) learners=(97581390330336645)"}
	{"level":"info","ts":"2024-09-16T10:38:40.425680Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"15aadc1eb541585","added-peer-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-09-16T10:38:40.425721Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:40.425758Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:40.426899Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:40.426941Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585","remote-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-09-16T10:38:40.426982Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:40.426997Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:40.427008Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:40.427204Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"warn","ts":"2024-09-16T10:38:40.960036Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"15aadc1eb541585","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-16T10:38:41.206733Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:41.214421Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:41.214525Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:41.226265Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"15aadc1eb541585","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:38:41.226320Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:41.234163Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"15aadc1eb541585","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:38:41.234207Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:38:41.495920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(97581390330336645 12593026477526642892 12748002774085638657)"}
	{"level":"info","ts":"2024-09-16T10:38:41.496027Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:38:41.496066Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:39:41.180189Z","caller":"traceutil/trace.go:171","msg":"trace[1418583242] transaction","detail":"{read_only:false; response_revision:1062; number_of_response:1; }","duration":"125.462592ms","start":"2024-09-16T10:39:41.054707Z","end":"2024-09-16T10:39:41.180170Z","steps":["trace[1418583242] 'process raft request'  (duration: 125.342248ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:39:41.971548Z","caller":"traceutil/trace.go:171","msg":"trace[1015336787] transaction","detail":"{read_only:false; response_revision:1065; number_of_response:1; }","duration":"126.822977ms","start":"2024-09-16T10:39:41.844705Z","end":"2024-09-16T10:39:41.971528Z","steps":["trace[1015336787] 'process raft request'  (duration: 126.705458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:39:42.668052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.952239ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:39:42.668282Z","caller":"traceutil/trace.go:171","msg":"trace[1603374473] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1067; }","duration":"159.211251ms","start":"2024-09-16T10:39:42.509052Z","end":"2024-09-16T10:39:42.668264Z","steps":["trace[1603374473] 'range keys from in-memory index tree'  (duration: 158.93554ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:40:10 up 22 min,  0 users,  load average: 1.19, 0.84, 0.52
	Linux ha-107957 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [70b5c5b4e1dc30a22cf6cb15f81f3a486629e5aed5aca6e9dd70ad00dcc0acf4] <==
	I0916 10:39:35.594723       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:39:35.594738       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:39:45.595399       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:39:45.595435       1 main.go:299] handling current node
	I0916 10:39:45.595449       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:39:45.595454       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:39:45.595572       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:39:45.595580       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:39:55.594536       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:39:55.594578       1 main.go:299] handling current node
	I0916 10:39:55.594602       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:39:55.594610       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:39:55.594799       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:39:55.594813       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:39:55.594874       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:39:55.594886       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:39:55.594935       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0916 10:40:05.593806       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:40:05.593852       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:40:05.594006       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:05.594019       1 main.go:299] handling current node
	I0916 10:40:05.594033       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:40:05.594037       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:40:05.594083       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:40:05.594087       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
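
kindnet here is doing per-node routing: for every remote node it programs a route from that node's PodCIDR to its InternalIP (see the "Adding route ... Dst: 10.244.3.0/24 ... Gw: 192.168.49.5" entry once ha-107957-m04 joins). The resulting table can be checked on the node:

	minikube ssh -p ha-107957 -- ip route show | grep 10.244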
	
	
	==> kube-apiserver [b1d6cc64c9b2c6f964d9cfedd269b3427f97e09a546dab8177407bdf75af651a] <==
	I0916 10:37:18.323387       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:37:18.332803       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:37:18.334005       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:37:18.338869       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:37:18.735773       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:37:19.463510       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:37:19.474882       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:37:19.665941       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:37:24.237857       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:37:24.286933       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 10:39:35.076855       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48050: use of closed network connection
	E0916 10:39:35.228839       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48058: use of closed network connection
	E0916 10:39:35.384883       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48074: use of closed network connection
	E0916 10:39:35.574724       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48098: use of closed network connection
	E0916 10:39:35.730342       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48106: use of closed network connection
	E0916 10:39:35.886083       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48116: use of closed network connection
	E0916 10:39:36.040362       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48126: use of closed network connection
	E0916 10:39:36.189038       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48136: use of closed network connection
	E0916 10:39:36.336733       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48156: use of closed network connection
	E0916 10:39:36.602543       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48174: use of closed network connection
	E0916 10:39:36.750671       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48184: use of closed network connection
	E0916 10:39:36.899981       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48202: use of closed network connection
	E0916 10:39:37.053525       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48222: use of closed network connection
	E0916 10:39:37.213471       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48248: use of closed network connection
	E0916 10:39:37.363232       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48270: use of closed network connection
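
The burst of "use of closed network connection" errors are reads on 192.168.49.254:8443, which appears to be the kube-vip virtual IP (the node IPs are 192.168.49.2-5); they indicate clients closing connections mid-request and are ordinarily harmless churn rather than an apiserver fault. Apiserver health can be probed directly (context name from the log above):

	kubectl --context ha-107957 get --raw '/readyz?verbose'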
	
	
	==> kube-controller-manager [7e57abaf77dbcd8ae424e058d867ae32d9eebd67469026700eb14494673d5bd9] <==
	I0916 10:39:34.017180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.199338ms"
	I0916 10:39:34.017289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.262µs"
	I0916 10:39:34.504215       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.738363ms"
	I0916 10:39:34.504327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.542µs"
	I0916 10:39:40.062243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m02"
	I0916 10:39:41.752924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m03"
	E0916 10:39:50.271978       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-csg8t failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-csg8t\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 10:39:50.419304       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-107957-m04\" does not exist"
	I0916 10:39:50.439738       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-107957-m04" podCIDRs=["10.244.3.0/24"]
	I0916 10:39:50.439781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:50.440740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:50.839625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:50.949734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:51.088693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:52.349400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957"
	I0916 10:39:53.254355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:53.360861       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:53.478084       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-107957-m04"
	I0916 10:39:53.479190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:53.541957       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:00.658417       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:02.371311       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-107957-m04"
	I0916 10:40:02.371722       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:02.384368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:03.265953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	
	
	==> kube-proxy [961b9339405b05241fd3024c31a7114d64af8103178defd87467d05e162333dd] <==
	I0916 10:37:25.031622       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:37:25.222101       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:37:25.222169       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:37:25.243893       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:37:25.243973       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:37:25.245955       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:37:25.246245       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:37:25.246273       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:37:25.247638       1 config.go:199] "Starting service config controller"
	I0916 10:37:25.247684       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:37:25.248012       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:37:25.248043       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:37:25.248076       1 config.go:328] "Starting node config controller"
	I0916 10:37:25.248081       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:37:25.348839       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:37:25.348869       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:37:25.348888       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5962366f88b6f02c398ff89c07e8f8193763da0e0ff16d3f31f2f8e5d57c573b] <==
	W0916 10:37:16.811838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:37:16.811861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.630129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:37:17.630178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.670046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:37:17.670093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.676785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:37:17.676828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.792440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:37:17.792492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.864545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:37:17.864602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:37:18.407967       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:38:40.365719       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-62bx2\": pod kube-proxy-62bx2 is already assigned to node \"ha-107957-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-62bx2" node="ha-107957-m03"
	E0916 10:38:40.365840       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b04f58c1-710b-4602-88c4-ce46ad218d6a(kube-system/kube-proxy-62bx2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-62bx2"
	E0916 10:38:40.365867       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-62bx2\": pod kube-proxy-62bx2 is already assigned to node \"ha-107957-m03\"" pod="kube-system/kube-proxy-62bx2"
	I0916 10:38:40.365891       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-62bx2" node="ha-107957-m03"
	E0916 10:38:40.370067       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7bkf8\": pod kindnet-7bkf8 is already assigned to node \"ha-107957-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-7bkf8" node="ha-107957-m03"
	E0916 10:38:40.370228       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d577df2e-0955-4d71-ad76-410167df4a18(kube-system/kindnet-7bkf8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7bkf8"
	E0916 10:38:40.370258       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7bkf8\": pod kindnet-7bkf8 is already assigned to node \"ha-107957-m03\"" pod="kube-system/kindnet-7bkf8"
	I0916 10:38:40.370283       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7bkf8" node="ha-107957-m03"
	E0916 10:39:50.454329       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hm8zn\": pod kube-proxy-hm8zn is already assigned to node \"ha-107957-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hm8zn" node="ha-107957-m04"
	E0916 10:39:50.454395       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6ea6916e-f34c-42b3-996b-033915687fd1(kube-system/kube-proxy-hm8zn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hm8zn"
	E0916 10:39:50.454412       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hm8zn\": pod kube-proxy-hm8zn is already assigned to node \"ha-107957-m04\"" pod="kube-system/kube-proxy-hm8zn"
	I0916 10:39:50.454434       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hm8zn" node="ha-107957-m04"
	
	
	==> kubelet <==
	Sep 16 10:38:29 ha-107957 kubelet[1727]: E0916 10:38:29.604751    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483109604498297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:39 ha-107957 kubelet[1727]: E0916 10:38:39.606563    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483119606322536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:39 ha-107957 kubelet[1727]: E0916 10:38:39.606611    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483119606322536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:49 ha-107957 kubelet[1727]: E0916 10:38:49.608153    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483129607929725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:49 ha-107957 kubelet[1727]: E0916 10:38:49.608197    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483129607929725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:59 ha-107957 kubelet[1727]: E0916 10:38:59.609978    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483139609803045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:59 ha-107957 kubelet[1727]: E0916 10:38:59.610020    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483139609803045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:09 ha-107957 kubelet[1727]: E0916 10:39:09.611161    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483149610949133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:09 ha-107957 kubelet[1727]: E0916 10:39:09.611205    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483149610949133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:19 ha-107957 kubelet[1727]: E0916 10:39:19.612344    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483159612163714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:19 ha-107957 kubelet[1727]: E0916 10:39:19.612375    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483159612163714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:29 ha-107957 kubelet[1727]: E0916 10:39:29.613939    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483169613692411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:29 ha-107957 kubelet[1727]: E0916 10:39:29.613981    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483169613692411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:30 ha-107957 kubelet[1727]: I0916 10:39:30.195838    1727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fw56\" (UniqueName: \"kubernetes.io/projected/a43b7850-fcaa-4ca6-a5d0-c04bf031e2e8-kube-api-access-2fw56\") pod \"busybox-7dff88458-m2jh6\" (UID: \"a43b7850-fcaa-4ca6-a5d0-c04bf031e2e8\") " pod="default/busybox-7dff88458-m2jh6"
	Sep 16 10:39:33 ha-107957 kubelet[1727]: I0916 10:39:33.778047    1727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-m2jh6" podStartSLOduration=0.644047761 podStartE2EDuration="3.778021401s" podCreationTimestamp="2024-09-16 10:39:30 +0000 UTC" firstStartedPulling="2024-09-16 10:39:30.467563681 +0000 UTC m=+131.047799538" lastFinishedPulling="2024-09-16 10:39:33.601537309 +0000 UTC m=+134.181773178" observedRunningTime="2024-09-16 10:39:33.777923058 +0000 UTC m=+134.358158932" watchObservedRunningTime="2024-09-16 10:39:33.778021401 +0000 UTC m=+134.358257278"
	Sep 16 10:39:35 ha-107957 kubelet[1727]: E0916 10:39:35.228848    1727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54902->127.0.0.1:43613: write tcp 127.0.0.1:54902->127.0.0.1:43613: write: broken pipe
	Sep 16 10:39:36 ha-107957 kubelet[1727]: E0916 10:39:36.899930    1727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54924->127.0.0.1:43613: write tcp 127.0.0.1:54924->127.0.0.1:43613: write: broken pipe
	Sep 16 10:39:39 ha-107957 kubelet[1727]: E0916 10:39:39.615163    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483179614951012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:39 ha-107957 kubelet[1727]: E0916 10:39:39.615201    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483179614951012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:49 ha-107957 kubelet[1727]: E0916 10:39:49.616373    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483189616178160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:49 ha-107957 kubelet[1727]: E0916 10:39:49.616411    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483189616178160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:59 ha-107957 kubelet[1727]: E0916 10:39:59.617714    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483199617503803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:59 ha-107957 kubelet[1727]: E0916 10:39:59.617758    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483199617503803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:09 ha-107957 kubelet[1727]: E0916 10:40:09.619148    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483209618940894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:09 ha-107957 kubelet[1727]: E0916 10:40:09.619193    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483209618940894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-107957 -n ha-107957
helpers_test.go:261: (dbg) Run:  kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (469.501µs)
helpers_test.go:263: kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/NodeLabels (2.08s)
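The recurring "fork/exec /usr/local/bin/kubectl: exec format error" above is the kernel refusing to launch the kubectl binary, which almost always means the binary was built for a different CPU architecture than the host. As a hypothetical diagnostic (not part of the test suite), a short Go sketch can compare the binary's ELF machine type against the host; the binary path is taken from the failing output above:

	package main

	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		// Binary path taken from the failing test output above.
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot parse ELF header: %v\n", err)
			os.Exit(1)
		}
		defer f.Close()
		// On an amd64 host the machine type should be EM_X86_64; any other
		// value would reproduce the "exec format error" seen above.
		fmt.Printf("binary machine type: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
	}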

TestMultiControlPlane/serial/RestartSecondaryNode (23.77s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 node start m02 -v=7 --alsologtostderr
E0916 10:40:43.421488   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-107957 node start m02 -v=7 --alsologtostderr: (20.589832757s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (514.902µs)
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
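Note that the step above fails while launching the kubectl binary, not while talking to the cluster, so node state could still be checked against the API server directly. A minimal client-go sketch under that assumption, using the KUBECONFIG path that appears in the minikube logs below (illustrative only, not harness code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG path taken from the minikube logs further below.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3799/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Equivalent of "kubectl get nodes", without exec'ing the broken binary.
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Println(n.Name)
		}
	}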
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-107957
helpers_test.go:235: (dbg) docker inspect ha-107957:

-- stdout --
	[
	    {
	        "Id": "8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd",
	        "Created": "2024-09-16T10:37:05.006225665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 58964,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:37:05.118823416Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/hosts",
	        "LogPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd-json.log",
	        "Name": "/ha-107957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-107957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-107957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-107957",
	                "Source": "/var/lib/docker/volumes/ha-107957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-107957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-107957",
	                "name.minikube.sigs.k8s.io": "ha-107957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1596d8f3a177074ac09c8b8ac92b313e5c035ff2701330f9d1b9b910d34ca9b",
	            "SandboxKey": "/var/run/docker/netns/f1596d8f3a17",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-107957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1162a04f8fb0eca4f56c515332b1b6b72501106e380521da303a5999505b78f5",
	                    "EndpointID": "6fab7b78e88e07ed9e169eb5c488f69225a0919e60c622ad643d4f3c5da0293c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-107957",
	                        "8934c54a2cf0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
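The NetworkSettings.Ports section above shows each container port published on 127.0.0.1 with an ephemeral host port (the API server's 8443/tcp maps to 32786 here). A small illustrative Go sketch, assuming docker is on PATH, that extracts that mapping from the inspect JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Container name taken from the inspect output above.
		out, err := exec.Command("docker", "inspect", "ha-107957").Output()
		if err != nil {
			log.Fatal(err)
		}
		// docker inspect prints a JSON array, one object per container.
		var containers []struct {
			NetworkSettings struct {
				Ports map[string][]struct {
					HostIp   string
					HostPort string
				}
			}
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("no such container")
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}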
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-107957 -n ha-107957
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-107957 logs -n 25: (1.314484407s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957:/home/docker/cp-test_ha-107957-m03_ha-107957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957 sudo cat                                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m03_ha-107957.txt                                |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m02:/home/docker/cp-test_ha-107957-m03_ha-107957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m02 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m03_ha-107957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04:/home/docker/cp-test_ha-107957-m03_ha-107957-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m04 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m03_ha-107957-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-107957 cp testdata/cp-test.txt                                               | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile432092999/001/cp-test_ha-107957-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957:/home/docker/cp-test_ha-107957-m04_ha-107957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957 sudo cat                                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957.txt                                |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m02:/home/docker/cp-test_ha-107957-m04_ha-107957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m02 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03:/home/docker/cp-test_ha-107957-m04_ha-107957-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m03 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-107957 node stop m02 -v=7                                                    | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-107957 node start m02 -v=7                                                   | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:41 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:36:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:36:59.603398   58299 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:59.603689   58299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:59.603701   58299 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:59.603706   58299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:59.603926   58299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:36:59.604506   58299 out.go:352] Setting JSON to false
	I0916 10:36:59.605423   58299 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1160,"bootTime":1726481860,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:36:59.605545   58299 start.go:139] virtualization: kvm guest
	I0916 10:36:59.607783   58299 out.go:177] * [ha-107957] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:36:59.609154   58299 notify.go:220] Checking for updates...
	I0916 10:36:59.609171   58299 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:36:59.610814   58299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:36:59.612398   58299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:36:59.613838   58299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:36:59.615490   58299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:36:59.617049   58299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:36:59.618738   58299 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:36:59.642219   58299 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:36:59.642367   58299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:36:59.695784   58299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:36:59.683210757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:36:59.695892   58299 docker.go:318] overlay module found
	I0916 10:36:59.697854   58299 out.go:177] * Using the docker driver based on user configuration
	I0916 10:36:59.699133   58299 start.go:297] selected driver: docker
	I0916 10:36:59.699150   58299 start.go:901] validating driver "docker" against <nil>
	I0916 10:36:59.699162   58299 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:36:59.699956   58299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:36:59.752267   58299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:36:59.740856159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:36:59.752512   58299 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:36:59.752832   58299 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:36:59.754857   58299 out.go:177] * Using Docker driver with root privileges
	I0916 10:36:59.756598   58299 cni.go:84] Creating CNI manager for ""
	I0916 10:36:59.756649   58299 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:36:59.756662   58299 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:36:59.756765   58299 start.go:340] cluster config:
	{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:59.758448   58299 out.go:177] * Starting "ha-107957" primary control-plane node in "ha-107957" cluster
	I0916 10:36:59.759759   58299 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:36:59.761144   58299 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:36:59.762275   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:36:59.762316   58299 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:36:59.762325   58299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:36:59.762441   58299 cache.go:56] Caching tarball of preloaded images
	I0916 10:36:59.762548   58299 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:36:59.762566   58299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:36:59.763017   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:36:59.763050   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json: {Name:mkc6efad42d7e4a853da28912b65bbd6a7d5e70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
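
Note on the step above: the cluster config printed at start.go:340 is what gets serialized into the profile's config.json, and the WriteFile lock shown here guards that write. Individual fields can be pulled back out of the saved profile with a jq query against the logged path; this is an illustrative sketch (jq is not part of the logged run):

	jq '.KubernetesConfig.KubernetesVersion, .Nodes' \
	  /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json
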
	W0916 10:36:59.783435   58299 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:36:59.783453   58299 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:36:59.783516   58299 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:36:59.783530   58299 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:36:59.783534   58299 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:36:59.783541   58299 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:36:59.783546   58299 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:36:59.784658   58299 image.go:273] response: 
	I0916 10:36:59.844896   58299 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:36:59.844957   58299 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:36:59.844993   58299 start.go:360] acquireMachinesLock for ha-107957: {Name:mkd47d2ce5dbb0c6b4cd5ea9479cc8820c855026 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:36:59.845117   58299 start.go:364] duration metric: took 103.785µs to acquireMachinesLock for "ha-107957"
	I0916 10:36:59.845144   58299 start.go:93] Provisioning new machine with config: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:36:59.845216   58299 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:36:59.847363   58299 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:36:59.847606   58299 start.go:159] libmachine.API.Create for "ha-107957" (driver="docker")
	I0916 10:36:59.847632   58299 client.go:168] LocalClient.Create starting
	I0916 10:36:59.847693   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:36:59.847724   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:36:59.847736   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:36:59.847777   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:36:59.847798   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:36:59.847808   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:36:59.848117   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:36:59.866348   58299 cli_runner.go:211] docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:36:59.866437   58299 network_create.go:284] running [docker network inspect ha-107957] to gather additional debugging logs...
	I0916 10:36:59.866458   58299 cli_runner.go:164] Run: docker network inspect ha-107957
	W0916 10:36:59.884107   58299 cli_runner.go:211] docker network inspect ha-107957 returned with exit code 1
	I0916 10:36:59.884149   58299 network_create.go:287] error running [docker network inspect ha-107957]: docker network inspect ha-107957: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-107957 not found
	I0916 10:36:59.884164   58299 network_create.go:289] output of [docker network inspect ha-107957]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-107957 not found
	
	** /stderr **
	I0916 10:36:59.884296   58299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:36:59.902341   58299 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c8c7b0}
	I0916 10:36:59.902396   58299 network_create.go:124] attempt to create docker network ha-107957 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:36:59.902454   58299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-107957 ha-107957
	I0916 10:36:59.966916   58299 network_create.go:108] docker network ha-107957 192.168.49.0/24 created
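
The network_create step above first probed for a free private subnet (settling on 192.168.49.0/24) and then created the bridge network with the exact `docker network create` invocation logged. Once it returns, the allocation can be confirmed with an inspect template; this is a verification sketch, not a command from the logged run:

	docker network inspect ha-107957 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected output: 192.168.49.0/24 192.168.49.1
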
	I0916 10:36:59.966962   58299 kic.go:121] calculated static IP "192.168.49.2" for the "ha-107957" container
	I0916 10:36:59.967037   58299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:36:59.983709   58299 cli_runner.go:164] Run: docker volume create ha-107957 --label name.minikube.sigs.k8s.io=ha-107957 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:37:00.007615   58299 oci.go:103] Successfully created a docker volume ha-107957
	I0916 10:37:00.007698   58299 cli_runner.go:164] Run: docker run --rm --name ha-107957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957 --entrypoint /usr/bin/test -v ha-107957:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:37:00.506153   58299 oci.go:107] Successfully prepared a docker volume ha-107957
	I0916 10:37:00.506208   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:37:00.506231   58299 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:37:00.506290   58299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:37:04.940269   58299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.433935277s)
	I0916 10:37:04.940305   58299 kic.go:203] duration metric: took 4.434070761s to extract preloaded images to volume ...
	W0916 10:37:04.940441   58299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:37:04.940563   58299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:37:04.990735   58299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-107957 --name ha-107957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-107957 --network ha-107957 --ip 192.168.49.2 --volume ha-107957:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
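
Every `--publish` flag in the `docker run` above binds `127.0.0.1::<port>`, i.e. an ephemeral host port, so the SSH endpoint is only knowable after the container starts. The inspect template minikube itself uses further down in this log resolves it; shown standalone here for clarity:

	docker container inspect ha-107957 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# 32783 in this run (see the SSH client lines below)
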
	I0916 10:37:05.296263   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Running}}
	I0916 10:37:05.314573   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:05.333626   58299 cli_runner.go:164] Run: docker exec ha-107957 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:37:05.375828   58299 oci.go:144] the created container "ha-107957" has a running status.
	I0916 10:37:05.375871   58299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa...
	I0916 10:37:05.604964   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:37:05.605006   58299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:37:05.630238   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:05.652100   58299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:37:05.652120   58299 kic_runner.go:114] Args: [docker exec --privileged ha-107957 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:37:05.707244   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:05.730489   58299 machine.go:93] provisionDockerMachine start ...
	I0916 10:37:05.730581   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:05.753671   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:05.753962   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:37:05.753981   58299 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:37:05.952786   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957
	
	I0916 10:37:05.952829   58299 ubuntu.go:169] provisioning hostname "ha-107957"
	I0916 10:37:05.952915   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:05.971519   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:05.971759   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:37:05.971777   58299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957 && echo "ha-107957" | sudo tee /etc/hostname
	I0916 10:37:06.119572   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957
	
	I0916 10:37:06.119642   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:06.136270   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:06.136466   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:37:06.136489   58299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:37:06.265213   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:37:06.265242   58299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:37:06.265288   58299 ubuntu.go:177] setting up certificates
	I0916 10:37:06.265302   58299 provision.go:84] configureAuth start
	I0916 10:37:06.265385   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:37:06.281894   58299 provision.go:143] copyHostCerts
	I0916 10:37:06.281948   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:37:06.281984   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:37:06.281996   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:37:06.282069   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:37:06.282152   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:37:06.282173   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:37:06.282181   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:37:06.282208   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:37:06.282261   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:37:06.282281   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:37:06.282289   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:37:06.282313   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:37:06.282376   58299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957 san=[127.0.0.1 192.168.49.2 ha-107957 localhost minikube]
	I0916 10:37:06.439846   58299 provision.go:177] copyRemoteCerts
	I0916 10:37:06.439906   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:37:06.439942   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:06.456642   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:06.549647   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:37:06.549713   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:37:06.570805   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:37:06.570876   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:37:06.592035   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:37:06.592101   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:37:06.613074   58299 provision.go:87] duration metric: took 347.754949ms to configureAuth
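
configureAuth above generated a machine server certificate whose SANs (the `san=[...]` list in the provision.go:117 line) cover the loopback address, the container's static IP, the hostname, and the generic minikube names. The embedded names can be checked with openssl; an illustrative command, not part of the logged run:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem
	# expect the five entries from san=[127.0.0.1 192.168.49.2 ha-107957 localhost minikube]
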
	I0916 10:37:06.613106   58299 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:37:06.613293   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:06.613428   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:06.630199   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:06.630409   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:37:06.630427   58299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:37:06.847080   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:37:06.847108   58299 machine.go:96] duration metric: took 1.116591163s to provisionDockerMachine
	I0916 10:37:06.847121   58299 client.go:171] duration metric: took 6.999482958s to LocalClient.Create
	I0916 10:37:06.847136   58299 start.go:167] duration metric: took 6.999530723s to libmachine.API.Create "ha-107957"
	I0916 10:37:06.847145   58299 start.go:293] postStartSetup for "ha-107957" (driver="docker")
	I0916 10:37:06.847162   58299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:37:06.847232   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:37:06.847272   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:06.864290   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:06.958605   58299 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:37:06.961800   58299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:37:06.961830   58299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:37:06.961838   58299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:37:06.961844   58299 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:37:06.961854   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:37:06.961911   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:37:06.961991   58299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:37:06.962000   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:37:06.962091   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:37:06.970311   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:37:06.992153   58299 start.go:296] duration metric: took 144.993123ms for postStartSetup
	I0916 10:37:06.992514   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:37:07.010019   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:37:07.010320   58299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:37:07.010374   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:07.027342   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:07.118196   58299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:37:07.122812   58299 start.go:128] duration metric: took 7.277582674s to createHost
	I0916 10:37:07.122838   58299 start.go:83] releasing machines lock for "ha-107957", held for 7.277707937s
	I0916 10:37:07.122897   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:37:07.139939   58299 ssh_runner.go:195] Run: cat /version.json
	I0916 10:37:07.139963   58299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:37:07.139988   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:07.140039   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:07.157654   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:07.157822   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:07.248853   58299 ssh_runner.go:195] Run: systemctl --version
	I0916 10:37:07.327017   58299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:37:07.463377   58299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:37:07.467690   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:37:07.485312   58299 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:37:07.485399   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:37:07.511852   58299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
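
The two `find ... -exec mv` passes above neutralize the stock CNI configs by appending a `.mk_disabled` suffix, so only the CNI that minikube installs (kindnet, selected at cni.go:136 earlier) will be active. On the node this leaves the directory looking roughly like the following (a sketch of the expected state, not logged output):

	ls /etc/cni/net.d
	# 87-podman-bridge.conflist.mk_disabled  100-crio-bridge.conf.mk_disabled  <loopback conf>.mk_disabled
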
	I0916 10:37:07.511876   58299 start.go:495] detecting cgroup driver to use...
	I0916 10:37:07.511915   58299 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:37:07.511971   58299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:37:07.525710   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:37:07.536183   58299 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:37:07.536255   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:37:07.548767   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:37:07.561803   58299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:37:07.636189   58299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:37:07.720664   58299 docker.go:233] disabling docker service ...
	I0916 10:37:07.720733   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:37:07.739328   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:37:07.749960   58299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:37:07.828562   58299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:37:07.908170   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:37:07.918586   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:37:07.933088   58299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:37:07.933141   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.942185   58299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:37:07.942257   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.951755   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.960742   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.970406   58299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:37:07.979105   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:07.988477   58299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:08.003007   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
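
Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, forcing the cgroupfs cgroup manager, moving conmon into the pod cgroup, and opening unprivileged low ports. Reconstructed from the commands (a sketch, not a verbatim file dump), the relevant lines should read:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
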
	I0916 10:37:08.011742   58299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:37:08.019640   58299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:37:08.027221   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:08.098376   58299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:37:08.192013   58299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:37:08.192079   58299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:37:08.195597   58299 start.go:563] Will wait 60s for crictl version
	I0916 10:37:08.195647   58299 ssh_runner.go:195] Run: which crictl
	I0916 10:37:08.198745   58299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:37:08.229778   58299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:37:08.229860   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:37:08.262707   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:37:08.298338   58299 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:37:08.299827   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:37:08.316399   58299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:37:08.319895   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:37:08.330759   58299 kubeadm.go:883] updating cluster {Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:37:08.330882   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:37:08.330935   58299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:37:08.392137   58299 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:37:08.392166   58299 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:37:08.392230   58299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:37:08.423229   58299 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:37:08.423250   58299 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:37:08.423257   58299 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:37:08.423338   58299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
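
The kubelet unit drop-in printed above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 359-byte scp). Whether systemd actually composed the override can be checked on the node with the following (illustrative, not from the logged run):

	systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in above
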
	I0916 10:37:08.423398   58299 ssh_runner.go:195] Run: crio config
	I0916 10:37:08.463060   58299 cni.go:84] Creating CNI manager for ""
	I0916 10:37:08.463079   58299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:37:08.463090   58299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:37:08.463109   58299 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-107957 NodeName:ha-107957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:37:08.463248   58299 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-107957"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
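
The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2147-byte scp below). Recent kubeadm releases can sanity-check such a file before init; a hedged example, not a step minikube runs here:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
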
	
	I0916 10:37:08.463271   58299 kube-vip.go:115] generating kube-vip config ...
	I0916 10:37:08.463309   58299 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:37:08.474407   58299 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:37:08.474508   58299 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
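
Since the ip_vs modules were unavailable (kube-vip.go:163 above), this manifest runs kube-vip in ARP mode only: it leader-elects via the plndr-cp-lock lease and announces the VIP 192.168.49.254 on eth0 of whichever control-plane node holds the lease. Once the control plane is up, the address should be visible on that node; an illustrative check, not from this log:

	ip addr show dev eth0 | grep 192.168.49.254
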
	I0916 10:37:08.474558   58299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:37:08.482233   58299 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:37:08.482294   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:37:08.489911   58299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0916 10:37:08.505379   58299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:37:08.523137   58299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0916 10:37:08.539035   58299 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 10:37:08.555396   58299 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:37:08.558912   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:37:08.569471   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:08.643150   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:37:08.655259   58299 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.2
	I0916 10:37:08.655281   58299 certs.go:194] generating shared ca certs ...
	I0916 10:37:08.655302   58299 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.655465   58299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:37:08.655513   58299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:37:08.655526   58299 certs.go:256] generating profile certs ...
	I0916 10:37:08.655584   58299 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:37:08.655612   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt with IP's: []
	I0916 10:37:08.751754   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt ...
	I0916 10:37:08.751786   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt: {Name:mk3ab8542401b8617feb30dcb924978b7ec3a34d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.751954   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key ...
	I0916 10:37:08.751965   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key: {Name:mkc20b79a2c080fec017a4b392198b3d6dc3a922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.752038   58299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.717802a3
	I0916 10:37:08.752052   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.717802a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 10:37:08.885097   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.717802a3 ...
	I0916 10:37:08.885134   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.717802a3: {Name:mk051112b9fba334b7ed02cba0916716ba024ac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.885387   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.717802a3 ...
	I0916 10:37:08.885408   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.717802a3: {Name:mke96f86839b0890c15fe3dd30fc968634547331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.885544   58299 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.717802a3 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
	I0916 10:37:08.885793   58299 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.717802a3 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key
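
Note that the apiserver certificate generated above carries both the node IP and the HA VIP (the crypto.go:68 line lists IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]), so clients can validate the API server whether they dial the node directly or go through kube-vip. An illustrative inspection command, not part of the logged run:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
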
	I0916 10:37:08.885880   58299 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:37:08.885904   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt with IP's: []
	I0916 10:37:08.936312   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt ...
	I0916 10:37:08.936345   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt: {Name:mk2a951a04a3eac4ee0442d03ef1c1850492250e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.936526   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key ...
	I0916 10:37:08.936545   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key: {Name:mkdf74098b886a4bb48cd3af60493afa29ff1d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:08.936653   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:37:08.936672   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:37:08.936685   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:37:08.936703   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:37:08.936721   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:37:08.936738   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:37:08.936751   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:37:08.936763   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:37:08.936827   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:37:08.936872   58299 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:37:08.936887   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:37:08.936925   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:37:08.936954   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:37:08.936988   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:37:08.937038   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:37:08.937089   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:37:08.937111   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:08.937129   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:37:08.937799   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:37:08.959998   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:37:08.982088   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:37:09.004374   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:37:09.025961   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:37:09.046778   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:37:09.068289   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:37:09.090319   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:37:09.112963   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:37:09.134652   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:37:09.156586   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:37:09.178427   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:37:09.194956   58299 ssh_runner.go:195] Run: openssl version
	I0916 10:37:09.199963   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:37:09.208595   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:09.211731   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:09.211780   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:09.217946   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:37:09.226591   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:37:09.235060   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:37:09.238288   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:37:09.238346   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:37:09.244671   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:37:09.252760   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:37:09.261123   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:37:09.264330   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:37:09.264391   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:37:09.270558   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
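
The run of commands above follows the standard OpenSSL trust-store convention: each CA certificate is hashed with "openssl x509 -hash -noout" and then symlinked into /etc/ssl/certs as <subject-hash>.0 so TLS clients can find it by subject. A minimal Go sketch of the same idea, shelling out to openssl just as these ssh_runner calls do (the path in main is taken from this log; error handling is simplified and this is not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert mirrors the convention in the log: compute the OpenSSL
	// subject hash of a PEM certificate, then symlink it into /etc/ssl/certs
	// as "<hash>.0" (the equivalent of the "ln -fs ... /etc/ssl/certs/b5213941.0"
	// commands above).
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // "-f": replace a stale link if one exists
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
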
	I0916 10:37:09.279259   58299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:37:09.282271   58299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:37:09.282328   58299 kubeadm.go:392] StartCluster: {Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:37:09.282415   58299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:37:09.282463   58299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:37:09.314769   58299 cri.go:89] found id: ""
	I0916 10:37:09.314842   58299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:37:09.322996   58299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:37:09.331095   58299 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:37:09.331144   58299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:37:09.339004   58299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:37:09.339022   58299 kubeadm.go:157] found existing configuration files:
	
	I0916 10:37:09.339060   58299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:37:09.346697   58299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:37:09.346759   58299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:37:09.354349   58299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:37:09.362136   58299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:37:09.362191   58299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:37:09.370301   58299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:37:09.378467   58299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:37:09.378518   58299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:37:09.386401   58299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:37:09.394661   58299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:37:09.394712   58299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:37:09.402526   58299 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:37:09.438918   58299 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:37:09.438991   58299 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:37:09.456400   58299 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:37:09.456489   58299 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:37:09.456543   58299 kubeadm.go:310] OS: Linux
	I0916 10:37:09.456616   58299 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:37:09.456698   58299 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:37:09.456774   58299 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:37:09.456844   58299 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:37:09.456945   58299 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:37:09.457039   58299 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:37:09.457112   58299 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:37:09.457181   58299 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:37:09.457254   58299 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:37:09.509540   58299 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:37:09.509704   58299 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:37:09.509879   58299 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:37:09.515810   58299 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:37:09.518861   58299 out.go:235]   - Generating certificates and keys ...
	I0916 10:37:09.518989   58299 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:37:09.519057   58299 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:37:09.764883   58299 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:37:09.936413   58299 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:37:10.049490   58299 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:37:10.126312   58299 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:37:10.382170   58299 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:37:10.382328   58299 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-107957 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:37:10.563937   58299 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:37:10.564073   58299 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-107957 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:37:10.779144   58299 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:37:10.969132   58299 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:37:11.165366   58299 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:37:11.165487   58299 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:37:11.276973   58299 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:37:11.364644   58299 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:37:11.593022   58299 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:37:11.701769   58299 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:37:12.007156   58299 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:37:12.007629   58299 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:37:12.010092   58299 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:37:12.012490   58299 out.go:235]   - Booting up control plane ...
	I0916 10:37:12.012645   58299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:37:12.012798   58299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:37:12.012887   58299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:37:12.021074   58299 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:37:12.026362   58299 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:37:12.026435   58299 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:37:12.104718   58299 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:37:12.104865   58299 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:37:12.606352   58299 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.775519ms
	I0916 10:37:12.606466   58299 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:37:18.648021   58299 kubeadm.go:310] [api-check] The API server is healthy after 6.041604794s
	I0916 10:37:18.658951   58299 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:37:18.670867   58299 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:37:19.192445   58299 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:37:19.192661   58299 kubeadm.go:310] [mark-control-plane] Marking the node ha-107957 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:37:19.200551   58299 kubeadm.go:310] [bootstrap-token] Using token: lf37vj.8fzapfwp2hty22qd
	I0916 10:37:19.201990   58299 out.go:235]   - Configuring RBAC rules ...
	I0916 10:37:19.202135   58299 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:37:19.207027   58299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:37:19.213328   58299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:37:19.215800   58299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:37:19.218422   58299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:37:19.220911   58299 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:37:19.230308   58299 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:37:19.476474   58299 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:37:20.054334   58299 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:37:20.055469   58299 kubeadm.go:310] 
	I0916 10:37:20.055582   58299 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:37:20.055603   58299 kubeadm.go:310] 
	I0916 10:37:20.055691   58299 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:37:20.055700   58299 kubeadm.go:310] 
	I0916 10:37:20.055744   58299 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:37:20.055814   58299 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:37:20.055905   58299 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:37:20.055919   58299 kubeadm.go:310] 
	I0916 10:37:20.055991   58299 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:37:20.056000   58299 kubeadm.go:310] 
	I0916 10:37:20.056063   58299 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:37:20.056072   58299 kubeadm.go:310] 
	I0916 10:37:20.056141   58299 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:37:20.056247   58299 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:37:20.056327   58299 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:37:20.056343   58299 kubeadm.go:310] 
	I0916 10:37:20.056429   58299 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:37:20.056498   58299 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:37:20.056504   58299 kubeadm.go:310] 
	I0916 10:37:20.056572   58299 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lf37vj.8fzapfwp2hty22qd \
	I0916 10:37:20.056673   58299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:37:20.056693   58299 kubeadm.go:310] 	--control-plane 
	I0916 10:37:20.056699   58299 kubeadm.go:310] 
	I0916 10:37:20.056835   58299 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:37:20.056854   58299 kubeadm.go:310] 
	I0916 10:37:20.056976   58299 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lf37vj.8fzapfwp2hty22qd \
	I0916 10:37:20.057144   58299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:37:20.059802   58299 kubeadm.go:310] W0916 10:37:09.436372    1317 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:37:20.060068   58299 kubeadm.go:310] W0916 10:37:09.436985    1317 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:37:20.060339   58299 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:37:20.060492   58299 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
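
The --discovery-token-ca-cert-hash value printed in the kubeadm join commands above is kubeadm's public-key pin: a SHA-256 digest over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A short Go sketch that recomputes it from the CA file this run installs (a verification sketch, not minikube code; the path is the one used earlier in this log):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash reproduces the "sha256:..." pin format used by
	// "kubeadm join --discovery-token-ca-cert-hash": a SHA-256 digest of the
	// CA certificate's raw SubjectPublicKeyInfo.
	func caCertHash(caPEM []byte) (string, error) {
		block, _ := pem.Decode(caPEM)
		if block == nil {
			return "", fmt.Errorf("no PEM block in CA file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		hash, err := caCertHash(caPEM)
		if err != nil {
			panic(err)
		}
		fmt.Println(hash) // should match the hash in the join commands above
	}
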
	I0916 10:37:20.060521   58299 cni.go:84] Creating CNI manager for ""
	I0916 10:37:20.060532   58299 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:37:20.062585   58299 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:37:20.063824   58299 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:37:20.067518   58299 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:37:20.067539   58299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:37:20.085065   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:37:20.278834   58299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:37:20.278925   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:20.278937   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-107957 minikube.k8s.io/updated_at=2024_09_16T10_37_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-107957 minikube.k8s.io/primary=true
	I0916 10:37:20.286426   58299 ops.go:34] apiserver oom_adj: -16
	I0916 10:37:20.346453   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:20.846570   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:21.347558   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:21.846632   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:22.346973   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:22.846611   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:23.347416   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:23.846503   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:24.347506   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:37:24.420953   58299 kubeadm.go:1113] duration metric: took 4.142098503s to wait for elevateKubeSystemPrivileges
	I0916 10:37:24.420986   58299 kubeadm.go:394] duration metric: took 15.138663112s to StartCluster
	I0916 10:37:24.421003   58299 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:24.421066   58299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:37:24.421733   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:24.421949   58299 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:37:24.421977   58299 start.go:241] waiting for startup goroutines ...
	I0916 10:37:24.421993   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:37:24.421992   58299 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:37:24.422083   58299 addons.go:69] Setting storage-provisioner=true in profile "ha-107957"
	I0916 10:37:24.422090   58299 addons.go:69] Setting default-storageclass=true in profile "ha-107957"
	I0916 10:37:24.422102   58299 addons.go:234] Setting addon storage-provisioner=true in "ha-107957"
	I0916 10:37:24.422114   58299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-107957"
	I0916 10:37:24.422129   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:37:24.422166   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:24.422486   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:24.422567   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:24.442732   58299 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:37:24.443096   58299 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:37:24.443827   58299 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:37:24.444138   58299 addons.go:234] Setting addon default-storageclass=true in "ha-107957"
	I0916 10:37:24.444184   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:37:24.444776   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:24.449256   58299 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:37:24.450714   58299 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:37:24.450735   58299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:37:24.450791   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:24.463468   58299 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:37:24.463492   58299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:37:24.463556   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:24.473766   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:24.481511   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:24.518044   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:37:24.715638   58299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:37:24.716533   58299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:37:25.000987   58299 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
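
The long sed pipeline a few lines up rewrites the coredns ConfigMap so the Corefile gains a hosts block; the injected fragment, reconstructed directly from that command, is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

which is what makes host.minikube.internal resolvable from inside pods, as the "host record injected" line confirms.
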
	I0916 10:37:25.254569   58299 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:37:25.254602   58299 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:37:25.254706   58299 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:37:25.254718   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:25.254728   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:25.254733   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:25.261525   58299 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:37:25.262050   58299 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:37:25.262066   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:25.262074   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:25.262077   58299 round_trippers.go:473]     Content-Type: application/json
	I0916 10:37:25.262080   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:25.264110   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
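
The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses is the default-storageclass addon checking the classes and re-PUTting "standard" with the default-class marking. Roughly the same round trip in client-go (a simplified sketch, not minikube's exact code: the GET here targets the class directly rather than the collection, and the kubeconfig path is the one this log writes):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3799/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		// GET the StorageClass, as in the first round_trippers request...
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// ...mark it as the cluster default, then PUT it back, as in the second.
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("standard StorageClass marked default")
	}
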
	I0916 10:37:25.265876   58299 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:37:25.267125   58299 addons.go:510] duration metric: took 845.133033ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 10:37:25.267157   58299 start.go:246] waiting for cluster config update ...
	I0916 10:37:25.267168   58299 start.go:255] writing updated cluster config ...
	I0916 10:37:25.268737   58299 out.go:201] 
	I0916 10:37:25.270294   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:25.270354   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:37:25.272185   58299 out.go:177] * Starting "ha-107957-m02" control-plane node in "ha-107957" cluster
	I0916 10:37:25.273722   58299 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:37:25.275133   58299 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:37:25.277028   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:37:25.277054   58299 cache.go:56] Caching tarball of preloaded images
	I0916 10:37:25.277117   58299 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:37:25.277157   58299 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:37:25.277167   58299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:37:25.277237   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:37:25.296591   58299 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:37:25.296612   58299 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:37:25.296699   58299 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:37:25.296718   58299 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:37:25.296724   58299 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:37:25.296733   58299 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:37:25.296741   58299 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:37:25.297950   58299 image.go:273] response: 
	I0916 10:37:25.354963   58299 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:37:25.355008   58299 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:37:25.355043   58299 start.go:360] acquireMachinesLock for ha-107957-m02: {Name:mkbd1a70c826dc0de88173dfa3a4a79ea68a23fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:37:25.355135   58299 start.go:364] duration metric: took 74.612µs to acquireMachinesLock for "ha-107957-m02"
	I0916 10:37:25.355163   58299 start.go:93] Provisioning new machine with config: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:37:25.355270   58299 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 10:37:25.357344   58299 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:37:25.357465   58299 start.go:159] libmachine.API.Create for "ha-107957" (driver="docker")
	I0916 10:37:25.357493   58299 client.go:168] LocalClient.Create starting
	I0916 10:37:25.357554   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:37:25.357584   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:37:25.357600   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:37:25.357655   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:37:25.357674   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:37:25.357683   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:37:25.357862   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:37:25.376207   58299 network_create.go:77] Found existing network {name:ha-107957 subnet:0xc001891350 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 10:37:25.376259   58299 kic.go:121] calculated static IP "192.168.49.3" for the "ha-107957-m02" container
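
kic.go derives each additional node's static address from the existing cluster network: the gateway holds 192.168.49.1 and the primary node 192.168.49.2, so the second node lands on 192.168.49.3. A tiny Go illustration of that offset arithmetic (the fixed per-node offsets are an assumption for illustration, not minikube's exact implementation):

	package main

	import (
		"fmt"
		"net"
	)

	// nodeIP illustrates how a per-node static IP like 192.168.49.3 can be
	// derived: take the subnet base and add the node's 1-based index plus one
	// (the gateway occupies .1, so node 1 gets .2, node 2 gets .3, ...).
	func nodeIP(base net.IP, node int) net.IP {
		ip := base.To4()
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += byte(node + 1)
		return out
	}

	func main() {
		base := net.ParseIP("192.168.49.0")
		fmt.Println(nodeIP(base, 2)) // prints 192.168.49.3 for the second node
	}
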
	I0916 10:37:25.376329   58299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:37:25.393281   58299 cli_runner.go:164] Run: docker volume create ha-107957-m02 --label name.minikube.sigs.k8s.io=ha-107957-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:37:25.411592   58299 oci.go:103] Successfully created a docker volume ha-107957-m02
	I0916 10:37:25.411675   58299 cli_runner.go:164] Run: docker run --rm --name ha-107957-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957-m02 --entrypoint /usr/bin/test -v ha-107957-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:37:26.040595   58299 oci.go:107] Successfully prepared a docker volume ha-107957-m02
	I0916 10:37:26.040631   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:37:26.040654   58299 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:37:26.040730   58299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:37:30.365019   58299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.324233407s)
	I0916 10:37:30.365053   58299 kic.go:203] duration metric: took 4.324395448s to extract preloaded images to volume ...
	W0916 10:37:30.365194   58299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:37:30.365304   58299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:37:30.412924   58299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-107957-m02 --name ha-107957-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-107957-m02 --network ha-107957 --ip 192.168.49.3 --volume ha-107957-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:37:30.712859   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Running}}
	I0916 10:37:30.730995   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:37:30.750011   58299 cli_runner.go:164] Run: docker exec ha-107957-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:37:30.792867   58299 oci.go:144] the created container "ha-107957-m02" has a running status.
	I0916 10:37:30.792893   58299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa...
	I0916 10:37:31.034298   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:37:31.034413   58299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:37:31.060538   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:37:31.078037   58299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:37:31.078058   58299 kic_runner.go:114] Args: [docker exec --privileged ha-107957-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:37:31.130198   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:37:31.150041   58299 machine.go:93] provisionDockerMachine start ...
	I0916 10:37:31.150128   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:31.167046   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:31.167267   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:37:31.167277   58299 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:37:31.380753   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m02
	
	I0916 10:37:31.380781   58299 ubuntu.go:169] provisioning hostname "ha-107957-m02"
	I0916 10:37:31.380828   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:31.399796   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:31.400018   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:37:31.400033   58299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m02 && echo "ha-107957-m02" | sudo tee /etc/hostname
	I0916 10:37:31.544605   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m02
	
	I0916 10:37:31.544683   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:31.561265   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:31.561506   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:37:31.561532   58299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:37:31.693615   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:37:31.693666   58299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:37:31.693687   58299 ubuntu.go:177] setting up certificates
	I0916 10:37:31.693702   58299 provision.go:84] configureAuth start
	I0916 10:37:31.693762   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:37:31.709997   58299 provision.go:143] copyHostCerts
	I0916 10:37:31.710033   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:37:31.710060   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:37:31.710069   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:37:31.710136   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:37:31.710216   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:37:31.710233   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:37:31.710240   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:37:31.710263   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:37:31.710305   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:37:31.710321   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:37:31.710327   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:37:31.710346   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:37:31.710416   58299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m02 san=[127.0.0.1 192.168.49.3 ha-107957-m02 localhost minikube]
	I0916 10:37:32.282283   58299 provision.go:177] copyRemoteCerts
	I0916 10:37:32.282343   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:37:32.282376   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:32.299781   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:32.395184   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:37:32.395245   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:37:32.417432   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:37:32.417512   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:37:32.439205   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:37:32.439281   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:37:32.461698   58299 provision.go:87] duration metric: took 767.984839ms to configureAuth
	I0916 10:37:32.461725   58299 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:37:32.461884   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:32.461973   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:32.478213   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:37:32.478401   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:37:32.478417   58299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:37:32.702104   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:37:32.702135   58299 machine.go:96] duration metric: took 1.552075835s to provisionDockerMachine
	I0916 10:37:32.702145   58299 client.go:171] duration metric: took 7.344647339s to LocalClient.Create
	I0916 10:37:32.702162   58299 start.go:167] duration metric: took 7.344697738s to libmachine.API.Create "ha-107957"
	I0916 10:37:32.702168   58299 start.go:293] postStartSetup for "ha-107957-m02" (driver="docker")
	I0916 10:37:32.702178   58299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:37:32.702230   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:37:32.702266   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:32.719256   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:32.818926   58299 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:37:32.821997   58299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:37:32.822029   58299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:37:32.822037   58299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:37:32.822043   58299 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:37:32.822052   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:37:32.822116   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:37:32.822202   58299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:37:32.822214   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:37:32.822322   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:37:32.830386   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:37:32.853244   58299 start.go:296] duration metric: took 151.062688ms for postStartSetup
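
postStartSetup then mirrors the host-side asset trees onto the node: everything under .minikube/files keeps its relative path, which is why files/etc/ssl/certs/112082.pem lands at /etc/ssl/certs/112082.pem. A sketch of that scan-and-map pass, assuming plain local filesystem access rather than minikube's filesync package:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // collectAssets maps every regular file under root to the absolute path
    // it should occupy on the node (its path relative to root).
    func collectAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, p)
            if err != nil {
                return err
            }
            assets[p] = "/" + filepath.ToSlash(rel) // e.g. /etc/ssl/certs/112082.pem
            return nil
        })
        return assets, err
    }

    func main() {
        m, err := collectAssets("/home/jenkins/minikube-integration/19651-3799/.minikube/files")
        fmt.Println(m, err)
    }
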
	I0916 10:37:32.853622   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:37:32.870415   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:37:32.870701   58299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:37:32.870743   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:32.887578   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:32.978119   58299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:37:32.982241   58299 start.go:128] duration metric: took 7.62695291s to createHost
	I0916 10:37:32.982274   58299 start.go:83] releasing machines lock for "ha-107957-m02", held for 7.627124916s
	I0916 10:37:32.982354   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:37:33.002043   58299 out.go:177] * Found network options:
	I0916 10:37:33.003837   58299 out.go:177]   - NO_PROXY=192.168.49.2
	W0916 10:37:33.005528   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:37:33.005577   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:37:33.005656   58299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:37:33.005706   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:33.005719   58299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:37:33.005766   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:37:33.022990   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:33.023413   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:37:33.261178   58299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:37:33.265498   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:37:33.283301   58299 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:37:33.283380   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:37:33.311542   58299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
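
Before installing its own CNI, minikube sidelines the runtime's default network configs: anything matching *bridge* or *podman* in /etc/cni/net.d is renamed with a .mk_disabled suffix (the log shows 87-podman-bridge.conflist and 100-crio-bridge.conf being disabled). A local sketch of the rename pass, assuming direct filesystem access instead of the SSH runner:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames matching CNI config files so the runtime
    // ignores them; it returns the list of files it disabled.
    func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
        var disabled []string
        for _, pat := range patterns {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return disabled, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already sidelined on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        got, err := disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
        fmt.Println(got, err)
    }
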
	I0916 10:37:33.311568   58299 start.go:495] detecting cgroup driver to use...
	I0916 10:37:33.311597   58299 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:37:33.311665   58299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:37:33.325590   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:37:33.336328   58299 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:37:33.336378   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:37:33.348934   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:37:33.362149   58299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:37:33.436372   58299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:37:33.516403   58299 docker.go:233] disabling docker service ...
	I0916 10:37:33.516466   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:37:33.534110   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:37:33.545090   58299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:37:33.618580   58299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:37:33.699969   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:37:33.711041   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:37:33.725983   58299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:37:33.726037   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.735506   58299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:37:33.735567   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.744790   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.754076   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.763393   58299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:37:33.771975   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.780729   58299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.794841   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:37:33.803921   58299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:37:33.812615   58299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:37:33.820773   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:33.901372   58299 ssh_runner.go:195] Run: sudo systemctl restart crio
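
The sed calls above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image, force cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, after which the daemon is restarted. A sketch of one such key rewrite in Go (setTOMLKey is a hypothetical stand-in for the sed one-liners):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey rewrites every "key = ..." line in a TOML-ish config to the
    // given quoted value, mirroring sed 's|^.*key = .*$|key = "value"|'.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile("(?m)^.*" + regexp.QuoteMeta(key) + " = .*$")
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
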
	I0916 10:37:34.010086   58299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:37:34.010146   58299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:37:34.013617   58299 start.go:563] Will wait 60s for crictl version
	I0916 10:37:34.013673   58299 ssh_runner.go:195] Run: which crictl
	I0916 10:37:34.016752   58299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:37:34.049238   58299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:37:34.049315   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:37:34.081490   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:37:34.117067   58299 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:37:34.118543   58299 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:37:34.120114   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:37:34.137233   58299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:37:34.140814   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:37:34.151343   58299 mustload.go:65] Loading cluster: ha-107957
	I0916 10:37:34.151521   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:34.151737   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:37:34.168300   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:37:34.168549   58299 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.3
	I0916 10:37:34.168559   58299 certs.go:194] generating shared ca certs ...
	I0916 10:37:34.168572   58299 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:34.168722   58299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:37:34.168773   58299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:37:34.168783   58299 certs.go:256] generating profile certs ...
	I0916 10:37:34.168859   58299 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:37:34.168884   58299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b
	I0916 10:37:34.168899   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.f59b195b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 10:37:34.301229   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.f59b195b ...
	I0916 10:37:34.301258   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.f59b195b: {Name:mk774b827afeed5d627c66ef74c7608e9a851512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:34.301452   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b ...
	I0916 10:37:34.301469   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b: {Name:mk992bd5f4fa93f43a7256d7e5350f32ffad3267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:34.301547   58299 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.f59b195b -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
	I0916 10:37:34.301678   58299 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key
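
Adding a second control plane forces a fresh apiserver serving cert, since the SAN list must now cover the service VIP 10.96.0.1, localhost, both node IPs and the kube-vip address 192.168.49.254. A self-contained sketch of minting a cert with those IP SANs via crypto/x509 (self-signed here for brevity; minikube signs with its minikubeCA key, and the key size and validity below are assumptions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // every address a client may dial
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
                net.ParseIP("192.168.49.254"),
            },
        }
        // Self-signed here; minikube uses its CA cert/key as the parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued apiserver cert, %d DER bytes\n", len(der))
    }
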
	I0916 10:37:34.301801   58299 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:37:34.301818   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:37:34.301839   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:37:34.301852   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:37:34.301865   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:37:34.301879   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:37:34.301891   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:37:34.301902   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:37:34.301914   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:37:34.301962   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:37:34.301992   58299 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:37:34.302001   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:37:34.302023   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:37:34.302046   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:37:34.302066   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:37:34.302102   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:37:34.302127   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:34.302144   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:37:34.302161   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:37:34.302225   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:34.318470   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:34.405685   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:37:34.409645   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:37:34.421144   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:37:34.424272   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:37:34.436239   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:37:34.439711   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:37:34.451270   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:37:34.454457   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:37:34.465684   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:37:34.468808   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:37:34.479927   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:37:34.483274   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:37:34.494765   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:37:34.518944   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:37:34.540774   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:37:34.562951   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:37:34.585188   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 10:37:34.607554   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:37:34.630026   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:37:34.652393   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:37:34.674836   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:37:34.697941   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:37:34.720114   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:37:34.742041   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:37:34.758526   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:37:34.774581   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:37:34.791700   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:37:34.807947   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:37:34.824874   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:37:34.841359   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:37:34.858359   58299 ssh_runner.go:195] Run: openssl version
	I0916 10:37:34.863194   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:37:34.871960   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:37:34.875277   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:37:34.875384   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:37:34.881694   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:37:34.890738   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:37:34.899773   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:34.902995   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:34.903050   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:37:34.909715   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:37:34.918851   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:37:34.927848   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:37:34.931537   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:37:34.931593   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:37:34.938226   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
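
Each CA is installed twice: copied into /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its subject-hash name (b5213941.0 for minikubeCA.pem above), which is how OpenSSL's hash-based lookup finds trust anchors. A sketch of the hash-and-link step, shelling out to openssl the same way the runner does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at pemPath,
    // the layout OpenSSL consults when verifying certificates.
    func linkByHash(pemPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // ln -fs equivalent: drop any stale link first
        return link, os.Symlink(pemPath, link)
    }

    func main() {
        link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println(link, err)
    }
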
	I0916 10:37:34.947489   58299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:37:34.950710   58299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
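
A failed stat of apiserver-kubelet-client.crt is deliberately treated as a signal rather than an error: that cert only exists once kubeadm has run, so a non-zero exit means this node is joining for the first time. A tiny sketch of the probe-by-exit-code pattern (run locally here; minikube does it over SSH):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // isFirstStart treats a failed stat of the kubeadm-managed cert as the
    // signal that kubeadm has never run on this node.
    func isFirstStart() bool {
        err := exec.Command("stat", "/var/lib/minikube/certs/apiserver-kubelet-client.crt").Run()
        var ee *exec.ExitError
        return errors.As(err, &ee) // non-zero exit -> file missing -> first start
    }

    func main() { fmt.Println(isFirstStart()) }
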
	I0916 10:37:34.950756   58299 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0916 10:37:34.950844   58299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:37:34.950873   58299 kube-vip.go:115] generating kube-vip config ...
	I0916 10:37:34.950904   58299 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:37:34.961898   58299 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:37:34.961973   58299 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
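
Because the ip_vs modules are missing, kube-vip falls back to ARP-based leader election: whichever control-plane pod holds the plndr-cp-lock lease answers for 192.168.49.254 on eth0, and the manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it as a static pod. A sketch of rendering such a manifest from a template, trimmed to the fields that vary (the full manifest is in the log above):

    package main

    import (
        "os"
        "text/template"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - {name: vip_interface, value: {{.Interface}}}
        - {name: address, value: {{.VIP}}}
        - {name: cp_enable, value: "true"}
        - {name: vip_leaderelection, value: "true"}
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifest))
        // The kubelet picks the file up from /etc/kubernetes/manifests.
        _ = t.Execute(os.Stdout, struct{ VIP, Interface string }{"192.168.49.254", "eth0"})
    }
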
	I0916 10:37:34.962022   58299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:37:34.970528   58299 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:37:34.970590   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:37:34.978912   58299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:37:34.995923   58299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:37:35.012920   58299 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:37:35.029471   58299 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:37:35.032620   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
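
Host aliases (host.minikube.internal earlier, control-plane.minikube.internal here) are kept current by filtering any stale line out of /etc/hosts and appending the fresh mapping via a temp file. A sketch of the same filter-and-append in Go, assuming local file access:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHostsEntry drops any stale line ending in "\t<host>" and appends a
    // fresh "ip\thost" mapping, like the grep -v / echo pipeline in the log.
    func pinHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        fmt.Println(pinHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"))
    }
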
	I0916 10:37:35.042418   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:35.119733   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:37:35.133407   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:37:35.133649   58299 start.go:317] joinCluster: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:37:35.133739   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:37:35.133789   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:37:35.154278   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:37:35.298604   58299 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:37:35.298644   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5wd6mt.whossothqn01zo81 --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-107957-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 10:37:39.016515   58299 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5wd6mt.whossothqn01zo81 --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-107957-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (3.717843693s)
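
The join itself is two remote commands: kubeadm token create --print-join-command --ttl=0 on the existing control plane yields the join line, which is then replayed on the new node with --control-plane, the advertise address and the CRI socket appended (it completed in about 3.7s here). A sketch of wiring the two together with os/exec, standing in for minikube's SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // On the primary: mint a non-expiring token plus the matching join command.
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        join := strings.TrimSpace(string(out)) +
            " --control-plane --apiserver-advertise-address=192.168.49.3" +
            " --cri-socket unix:///var/run/crio/crio.sock --ignore-preflight-errors=all"
        fmt.Println("would run on m02:", join)
        // minikube then executes this line over SSH on the joining node.
        _ = exec.Command("bash", "-c", join) // sketch only; not started here
    }
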
	I0916 10:37:39.016585   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:37:39.911856   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-107957-m02 minikube.k8s.io/updated_at=2024_09_16T10_37_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-107957 minikube.k8s.io/primary=false
	I0916 10:37:40.021485   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-107957-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:37:40.120388   58299 start.go:319] duration metric: took 4.986732728s to joinCluster
	I0916 10:37:40.120458   58299 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:37:40.120871   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:37:40.122048   58299 out.go:177] * Verifying Kubernetes components...
	I0916 10:37:40.124605   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:37:40.710636   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:37:40.800465   58299 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:37:40.800815   58299 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:37:40.800901   58299 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:37:40.801246   58299 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m02" to be "Ready" ...
	I0916 10:37:40.801420   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:40.801432   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:40.801440   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:40.801445   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:40.811587   58299 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:37:41.302301   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:41.302327   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:41.302339   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:41.302344   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:41.306308   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:37:41.802181   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:41.802204   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:41.802214   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:41.802219   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:41.804886   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:42.301662   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:42.301685   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:42.301693   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:42.301698   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:42.304381   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:42.802328   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:42.802353   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:42.802364   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:42.802372   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:42.807328   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:37:42.807852   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:43.302198   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:43.302219   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:43.302226   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:43.302230   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:43.305035   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:43.801527   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:43.801552   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:43.801564   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:43.801571   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:43.804080   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:44.301987   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:44.302007   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:44.302013   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:44.302017   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:44.304534   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:44.801872   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:44.801893   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:44.801903   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:44.801910   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:44.804820   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:45.301535   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:45.301555   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:45.301563   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:45.301567   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:45.304170   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:45.306018   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:45.801549   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:45.801571   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:45.801578   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:45.801582   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:45.804831   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:37:46.301516   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:46.301536   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:46.301543   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:46.301547   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:46.304387   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:46.801814   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:46.801838   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:46.801847   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:46.801851   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:46.804308   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:47.301754   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:47.301779   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:47.301787   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:47.301791   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:47.304275   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:47.802061   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:47.802082   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:47.802090   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:47.802094   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:47.804821   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:47.805278   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:48.301505   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:48.301525   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:48.301533   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:48.301537   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:48.304326   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:48.802241   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:48.802263   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:48.802274   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:48.802281   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:48.805084   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:49.301592   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:49.301621   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:49.301633   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:49.301639   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:49.303956   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:49.802191   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:49.802227   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:49.802234   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:49.802239   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:49.804941   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:49.805629   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:50.301543   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:50.301585   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:50.301594   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:50.301600   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:50.304001   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:50.801527   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:50.801547   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:50.801555   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:50.801559   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:50.804309   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:51.302366   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:51.302390   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:51.302401   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:51.302408   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:51.304894   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:51.801513   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:51.801535   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:51.801545   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:51.801553   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:51.804240   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:52.302137   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:52.302163   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:52.302173   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:52.302179   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:52.304846   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:52.305481   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:52.801547   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:52.801576   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:52.801589   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:52.801595   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:52.804369   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:53.302308   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:53.302328   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:53.302335   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:53.302339   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:53.305223   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:53.801831   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:53.801897   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:53.801910   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:53.801915   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:53.804482   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:54.302458   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:54.302481   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:54.302489   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:54.302495   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:54.305238   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:54.305894   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:54.801464   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:54.801484   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:54.801491   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:54.801496   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:54.804214   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:55.301815   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:55.301838   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:55.301845   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:55.301850   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:55.304496   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:55.802365   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:55.802385   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:55.802393   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:55.802398   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:55.805290   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:56.302157   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:56.302178   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:56.302186   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:56.302189   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:56.304850   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:56.801532   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:56.801553   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:56.801561   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:56.801565   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:56.804488   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:56.805160   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:57.302416   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:57.302436   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:57.302444   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:57.302447   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:57.305363   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:57.802288   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:57.802321   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:57.802333   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:57.802341   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:57.811723   58299 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 10:37:58.302067   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:58.302089   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:58.302098   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:58.302100   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:58.304659   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:58.801524   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:58.801544   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:58.801551   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:58.801557   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:58.804234   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:59.302139   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:59.302158   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:59.302166   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:59.302169   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:59.304804   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:37:59.305320   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:37:59.802270   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:37:59.802295   58299 round_trippers.go:469] Request Headers:
	I0916 10:37:59.802309   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:37:59.802313   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:37:59.804903   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:00.301734   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:00.301757   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:00.301765   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:00.301769   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:00.304628   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:00.801514   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:00.801535   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:00.801543   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:00.801546   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:00.804443   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:01.302374   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:01.302397   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:01.302412   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:01.302415   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:01.305170   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:01.305665   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:01.802045   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:01.802066   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:01.802074   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:01.802079   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:01.804686   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:02.301476   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:02.301496   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:02.301504   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:02.301508   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:02.304452   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:02.802138   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:02.802166   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:02.802174   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:02.802177   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:02.804937   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:03.301509   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:03.301531   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:03.301547   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:03.301567   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:03.304473   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:03.802309   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:03.802379   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:03.802392   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:03.802400   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:03.804934   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:03.805395   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:04.301506   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:04.301529   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:04.301540   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:04.301546   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:04.304289   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:04.801524   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:04.801547   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:04.801555   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:04.801559   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:04.804452   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:05.302041   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:05.302067   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:05.302075   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:05.302079   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:05.304793   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:05.801515   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:05.801537   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:05.801545   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:05.801550   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:05.804379   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:06.302227   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:06.302252   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:06.302261   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:06.302267   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:06.305289   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:06.305885   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:06.802185   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:06.802208   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:06.802216   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:06.802219   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:06.804966   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:07.301478   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:07.301498   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:07.301506   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:07.301510   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:07.304142   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:07.802119   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:07.802144   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:07.802154   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:07.802160   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:07.804835   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:08.301551   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:08.301571   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:08.301582   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:08.301587   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:08.304309   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:08.802410   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:08.802431   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:08.802441   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:08.802454   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:08.805162   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:08.805628   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:09.302238   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:09.302262   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:09.302274   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:09.302280   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:09.304866   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:09.802217   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:09.802240   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:09.802248   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:09.802252   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:09.804934   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:10.301534   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:10.301558   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:10.301570   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:10.301576   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:10.304330   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:10.802225   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:10.802247   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:10.802255   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:10.802260   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:10.804948   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:11.301530   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:11.301552   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:11.301566   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:11.301571   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:11.304365   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:11.304844   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:11.802203   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:11.802230   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:11.802240   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:11.802247   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:11.805188   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:12.302155   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:12.302178   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:12.302188   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:12.302193   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:12.304924   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:12.801529   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:12.801549   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:12.801555   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:12.801558   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:12.804066   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:13.301526   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:13.301546   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:13.301554   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:13.301559   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:13.304301   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:13.304921   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:13.801876   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:13.801897   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:13.801908   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:13.801913   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:13.804574   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:14.302479   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:14.302500   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:14.302508   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:14.302512   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:14.305106   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:14.802395   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:14.802416   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:14.802424   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:14.802428   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:14.805141   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:15.301516   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:15.301537   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:15.301545   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:15.301549   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:15.304261   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:15.801743   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:15.801777   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:15.801785   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:15.801788   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:15.804637   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:15.805139   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:16.301470   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:16.301496   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:16.301503   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:16.301507   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:16.304238   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:16.802170   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:16.802193   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:16.802200   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:16.802204   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:16.804626   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:17.302462   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:17.302488   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:17.302502   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:17.302508   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:17.305592   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:17.802472   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:17.802493   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:17.802501   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:17.802506   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:17.805055   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:17.805544   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:18.301522   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:18.301541   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:18.301550   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:18.301555   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:18.304290   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:18.802051   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:18.802090   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:18.802099   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:18.802103   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:18.805022   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:19.301527   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:19.301548   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:19.301556   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:19.301561   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:19.304219   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:19.802426   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:19.802447   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:19.802454   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:19.802461   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:19.805114   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:19.805765   58299 node_ready.go:53] node "ha-107957-m02" has status "Ready":"False"
	I0916 10:38:20.301502   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:20.301544   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.301553   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.301557   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.304392   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.802427   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:20.802454   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.802467   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.802475   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.805184   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.805685   58299 node_ready.go:49] node "ha-107957-m02" has status "Ready":"True"
	I0916 10:38:20.805707   58299 node_ready.go:38] duration metric: took 40.004435194s for node "ha-107957-m02" to be "Ready" ...
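	The ~40 seconds of GET requests above are the node_ready poll: the Node object is re-read roughly every 500ms until its Ready condition reports True. A minimal standalone sketch of the same loop follows, using only the Go standard library; the skipped TLS verification and the absent bearer token are simplifications for illustration, not the actual client setup.

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
		"time"
	)

	// node mirrors only the fields of the Kubernetes Node object that the
	// readiness check needs.
	type node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		// InsecureSkipVerify and the missing credentials are sketch-only
		// shortcuts; a real client presents the cluster's certs and token.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		url := "https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02"
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				var n node
				err := json.NewDecoder(resp.Body).Decode(&n)
				resp.Body.Close()
				if err == nil {
					for _, c := range n.Status.Conditions {
						if c.Type == "Ready" && c.Status == "True" {
							fmt.Println("node is Ready")
							return
						}
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible above
		}
		fmt.Println("timed out waiting for node to become Ready")
	}
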
	I0916 10:38:20.805739   58299 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:38:20.805837   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:20.805853   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.805862   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.805869   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.809565   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:20.815076   58299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.815153   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:38:20.815163   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.815170   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.815173   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.817483   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.818169   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:20.818186   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.818196   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.818200   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.820284   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.820794   58299 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:20.820810   58299 pod_ready.go:82] duration metric: took 5.712221ms for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.820819   58299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.820876   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:38:20.820883   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.820890   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.820894   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.823188   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.823919   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:20.823936   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.823944   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.823948   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.826129   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.826616   58299 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:20.826635   58299 pod_ready.go:82] duration metric: took 5.808507ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.826644   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.826696   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:38:20.826704   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.826711   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.826717   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.830219   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:20.830919   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:20.830938   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.830949   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.830953   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.834909   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:20.835471   58299 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:20.835493   58299 pod_ready.go:82] duration metric: took 8.841297ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.835506   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.835573   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:38:20.835585   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.835594   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.835603   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.837675   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:20.838355   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:20.838372   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:20.838382   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:20.838388   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:20.840341   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:38:20.840760   58299 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:20.840777   58299 pod_ready.go:82] duration metric: took 5.263219ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:20.840795   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.003172   58299 request.go:632] Waited for 162.309743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:38:21.003259   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:38:21.003269   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.003277   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.003280   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.006190   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
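	The "Waited for ... due to client-side throttling, not priority and fairness" lines come from the client's local token-bucket rate limiter, not from server-side APF. A sketch of the same pattern with golang.org/x/time/rate; the 5 QPS / burst-of-10 numbers are assumptions for the sketch, not the limiter settings in use here.

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// 5 requests/second with a burst of 10 (assumed values).
		limiter := rate.NewLimiter(rate.Limit(5), 10)
		for i := 0; i < 15; i++ {
			start := time.Now()
			if err := limiter.Wait(context.Background()); err != nil {
				return // context cancelled
			}
			if d := time.Since(start); d > time.Millisecond {
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
			}
			// ...issue the API request here...
		}
	}
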
	I0916 10:38:21.203208   58299 request.go:632] Waited for 196.385519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:21.203296   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:21.203304   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.203318   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.203330   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.206174   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:21.206680   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:21.206700   58299 pod_ready.go:82] duration metric: took 365.897277ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.206710   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.402762   58299 request.go:632] Waited for 195.962152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:38:21.402841   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:38:21.402857   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.402872   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.402881   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.405784   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:21.602767   58299 request.go:632] Waited for 196.360303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:21.602845   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:21.602854   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.602862   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.602870   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.605413   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:21.605893   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:21.605911   58299 pod_ready.go:82] duration metric: took 399.19447ms for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.605921   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:21.803002   58299 request.go:632] Waited for 197.006404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:38:21.803053   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:38:21.803058   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:21.803065   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:21.803073   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:21.805695   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.002864   58299 request.go:632] Waited for 196.382399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:22.002937   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:22.002944   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.002957   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.002968   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.005777   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.006329   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:22.006351   58299 pod_ready.go:82] duration metric: took 400.424868ms for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.006367   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.203384   58299 request.go:632] Waited for 196.945751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:38:22.203460   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:38:22.203465   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.203477   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.203484   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.206358   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.403327   58299 request.go:632] Waited for 196.250326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:22.403377   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:22.403382   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.403390   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.403394   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.406247   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.406759   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:22.406778   58299 pod_ready.go:82] duration metric: took 400.403552ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.406788   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.602937   58299 request.go:632] Waited for 196.085399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:38:22.603008   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:38:22.603015   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.603022   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.603030   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.606012   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.803074   58299 request.go:632] Waited for 196.341486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:22.803138   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:22.803149   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:22.803157   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:22.803162   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:22.805681   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:22.806114   58299 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:22.806132   58299 pod_ready.go:82] duration metric: took 399.337302ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:22.806144   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.003288   58299 request.go:632] Waited for 197.070192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:38:23.003363   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:38:23.003368   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.003375   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.003380   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.006475   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:23.202403   58299 request.go:632] Waited for 195.314585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:23.202476   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:23.202484   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.202493   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.202500   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.205303   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:23.205798   58299 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:23.205818   58299 pod_ready.go:82] duration metric: took 399.666408ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.205831   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.402948   58299 request.go:632] Waited for 197.03533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:38:23.403030   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:38:23.403035   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.403043   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.403049   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.405757   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:23.602642   58299 request.go:632] Waited for 196.301879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:23.602734   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:38:23.602747   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.602755   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.602759   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.605540   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:23.606064   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:23.606083   58299 pod_ready.go:82] duration metric: took 400.245071ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.606093   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:23.803170   58299 request.go:632] Waited for 197.002757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:38:23.803255   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:38:23.803265   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:23.803273   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:23.803326   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:23.806066   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:24.002961   58299 request.go:632] Waited for 196.361904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:24.003034   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:38:24.003046   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.003064   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.003090   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.005862   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:24.006294   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:38:24.006311   58299 pod_ready.go:82] duration metric: took 400.210196ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:38:24.006321   58299 pod_ready.go:39] duration metric: took 3.200563132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
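	Each pod_ready check above reduces to reading the pod's Ready condition. A pared-down sketch of that test; the pod type below is a stand-in for the real API object, not code from this repository.

	package main

	import "fmt"

	// condition and pod are minimal stand-ins for the real API objects.
	type condition struct{ Type, Status string }
	type pod struct {
		Name       string
		Conditions []condition
	}

	// isPodReady mirrors the per-pod test behind the pod_ready lines: a pod
	// counts as "Ready" when its Ready condition reports True.
	func isPodReady(p pod) bool {
		for _, c := range p.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		p := pod{Name: "etcd-ha-107957", Conditions: []condition{{"Ready", "True"}}}
		fmt.Println(p.Name, "ready:", isPodReady(p))
	}
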
	I0916 10:38:24.006335   58299 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:38:24.006403   58299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:38:24.017393   58299 api_server.go:72] duration metric: took 43.896903419s to wait for apiserver process to appear ...
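	Before probing health, the log waits for a kube-apiserver process via pgrep, whose exit status carries the answer. The same check from Go (sudo dropped for the sketch):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// pgrep exits 0 when at least one process matches: -f matches the
		// full command line, -x requires an exact match, -n picks the newest.
		err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		fmt.Println("apiserver process present:", err == nil)
	}
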
	I0916 10:38:24.017419   58299 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:38:24.017444   58299 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:38:24.022347   58299 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:38:24.022418   58299 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:38:24.022426   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.022434   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.022438   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.023255   58299 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:38:24.023378   58299 api_server.go:141] control plane version: v1.31.1
	I0916 10:38:24.023396   58299 api_server.go:131] duration metric: took 5.971353ms to wait for apiserver health ...
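	The healthz probe above treats the apiserver as healthy once GET /healthz returns 200 with body "ok". A minimal sketch, again with TLS verification disabled purely for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz error:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
	}
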
	I0916 10:38:24.023404   58299 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:38:24.202861   58299 request.go:632] Waited for 179.38976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:24.202958   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:24.202970   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.202979   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.202987   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.207117   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:24.211158   58299 system_pods.go:59] 17 kube-system pods found
	I0916 10:38:24.211185   58299 system_pods.go:61] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:38:24.211190   58299 system_pods.go:61] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:38:24.211194   58299 system_pods.go:61] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:38:24.211198   58299 system_pods.go:61] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:38:24.211201   58299 system_pods.go:61] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:38:24.211204   58299 system_pods.go:61] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:38:24.211207   58299 system_pods.go:61] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:38:24.211210   58299 system_pods.go:61] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:38:24.211213   58299 system_pods.go:61] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:38:24.211216   58299 system_pods.go:61] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:38:24.211220   58299 system_pods.go:61] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:38:24.211223   58299 system_pods.go:61] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:38:24.211226   58299 system_pods.go:61] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:38:24.211229   58299 system_pods.go:61] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:38:24.211231   58299 system_pods.go:61] "kube-vip-ha-107957" [f6ff7681-062a-4c0b-a621-4b5c3079ee99] Running
	I0916 10:38:24.211234   58299 system_pods.go:61] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:38:24.211236   58299 system_pods.go:61] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:38:24.211244   58299 system_pods.go:74] duration metric: took 187.832357ms to wait for pod list to return data ...
	I0916 10:38:24.211254   58299 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:38:24.402614   58299 request.go:632] Waited for 191.282955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:38:24.402708   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:38:24.402722   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.402731   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.402741   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.405729   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:24.405961   58299 default_sa.go:45] found service account: "default"
	I0916 10:38:24.405980   58299 default_sa.go:55] duration metric: took 194.718283ms for default service account to be created ...
	I0916 10:38:24.405991   58299 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:38:24.603485   58299 request.go:632] Waited for 197.425301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:24.603565   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:38:24.603574   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.603591   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.603746   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.608223   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:24.612176   58299 system_pods.go:86] 17 kube-system pods found
	I0916 10:38:24.612232   58299 system_pods.go:89] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:38:24.612245   58299 system_pods.go:89] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:38:24.612255   58299 system_pods.go:89] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:38:24.612260   58299 system_pods.go:89] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:38:24.612266   58299 system_pods.go:89] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:38:24.612270   58299 system_pods.go:89] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:38:24.612274   58299 system_pods.go:89] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:38:24.612283   58299 system_pods.go:89] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:38:24.612287   58299 system_pods.go:89] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:38:24.612293   58299 system_pods.go:89] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:38:24.612297   58299 system_pods.go:89] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:38:24.612301   58299 system_pods.go:89] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:38:24.612304   58299 system_pods.go:89] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:38:24.612310   58299 system_pods.go:89] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:38:24.612314   58299 system_pods.go:89] "kube-vip-ha-107957" [f6ff7681-062a-4c0b-a621-4b5c3079ee99] Running
	I0916 10:38:24.612319   58299 system_pods.go:89] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:38:24.612326   58299 system_pods.go:89] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:38:24.612332   58299 system_pods.go:126] duration metric: took 206.336369ms to wait for k8s-apps to be running ...
	I0916 10:38:24.612341   58299 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:38:24.612385   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:38:24.624271   58299 system_svc.go:56] duration metric: took 11.92066ms WaitForService to wait for kubelet
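	The kubelet check relies on systemctl's --quiet mode, where the exit status alone says whether the unit is active. Minikube runs it over SSH inside the node; this sketch runs it locally:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; exit status 0 means the unit is active.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
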
	I0916 10:38:24.624302   58299 kubeadm.go:582] duration metric: took 44.503819786s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:38:24.624328   58299 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:38:24.802750   58299 request.go:632] Waited for 178.34473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:38:24.802803   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:38:24.802807   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:24.802815   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:24.802819   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:24.805865   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:24.806570   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:38:24.806594   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:38:24.806613   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:38:24.806617   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:38:24.806621   58299 node_conditions.go:105] duration metric: took 182.289173ms to run NodePressure ...
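	The NodePressure step reads each node's capacity fields. A sketch of extracting the same values from a Node object's JSON, seeded with the figures logged above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Trimmed-down Node JSON using the capacity values from the log.
		raw := []byte(`{"status":{"capacity":{"cpu":"8","ephemeral-storage":"304681132Ki"}}}`)
		var n struct {
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		}
		if err := json.Unmarshal(raw, &n); err != nil {
			panic(err)
		}
		fmt.Println("cpu:", n.Status.Capacity["cpu"])
		fmt.Println("ephemeral-storage:", n.Status.Capacity["ephemeral-storage"])
	}
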
	I0916 10:38:24.806634   58299 start.go:241] waiting for startup goroutines ...
	I0916 10:38:24.806659   58299 start.go:255] writing updated cluster config ...
	I0916 10:38:24.808791   58299 out.go:201] 
	I0916 10:38:24.810381   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:24.810473   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:38:24.812240   58299 out.go:177] * Starting "ha-107957-m03" control-plane node in "ha-107957" cluster
	I0916 10:38:24.814284   58299 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:38:24.815912   58299 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:38:24.817396   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:24.817420   58299 cache.go:56] Caching tarball of preloaded images
	I0916 10:38:24.817492   58299 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:38:24.817547   58299 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:38:24.817589   58299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:38:24.817732   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:38:24.837951   58299 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:38:24.837969   58299 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:38:24.838043   58299 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:38:24.838061   58299 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:38:24.838066   58299 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:38:24.838073   58299 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:38:24.838083   58299 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:38:24.841619   58299 image.go:273] response: 
	I0916 10:38:24.915250   58299 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:38:24.915289   58299 cache.go:194] Successfully downloaded all kic artifacts
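	The cache lines above first look for the kic base image in the local docker daemon and only then fall back to the cached tarball. A sketch of the daemon probe via the docker CLI's exit status; the tag is copied from the log, the digest omitted for brevity.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInDaemon reports whether the docker daemon already holds the image,
	// using docker image inspect's exit status as the answer.
	func imageInDaemon(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644"
		fmt.Println("in local daemon:", imageInDaemon(ref))
	}
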
	I0916 10:38:24.915323   58299 start.go:360] acquireMachinesLock for ha-107957-m03: {Name:mk0f035d5dad9998d086b052d83625d4474d070c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:38:24.915460   58299 start.go:364] duration metric: took 112.213µs to acquireMachinesLock for "ha-107957-m03"
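	acquireMachinesLock serializes machine creation across concurrent operations. The advisory flock below is an illustrative stand-in for that idea, not minikube's actual mutex implementation; the lock path is hypothetical and the sketch is Linux-only.

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	func main() {
		// Hypothetical lock file path, for the sketch only.
		f, err := os.OpenFile("/tmp/minikube-machines.lock", os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// LOCK_EX blocks until the exclusive lock is held.
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
			panic(err)
		}
		defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
		fmt.Println("lock held; safe to provision ha-107957-m03")
	}
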
	I0916 10:38:24.915490   58299 start.go:93] Provisioning new machine with config: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
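
The acquireMachinesLock step above is what keeps parallel provisioners from creating the same machine twice; note the 500ms retry delay and 10m timeout encoded in the lock spec. A minimal Go sketch of that acquire-with-delay/timeout pattern (names here are hypothetical; minikube's real lock is a named, cross-process mutex, so this only illustrates the semantics):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // tryLock attempts to acquire a one-slot lock (a buffered channel),
    // retrying every `delay` until `timeout` elapses.
    func tryLock(lock chan struct{}, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            select {
            case lock <- struct{}{}: // acquired
                return nil
            default:
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        machines := make(chan struct{}, 1)
        if err := tryLock(machines, 500*time.Millisecond, 10*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        defer func() { <-machines }() // release
        fmt.Println("provisioning ha-107957-m03 ...")
    }
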
	I0916 10:38:24.915652   58299 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 10:38:24.918037   58299 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:38:24.918168   58299 start.go:159] libmachine.API.Create for "ha-107957" (driver="docker")
	I0916 10:38:24.918198   58299 client.go:168] LocalClient.Create starting
	I0916 10:38:24.918280   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:38:24.918316   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:24.918336   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:24.918402   58299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:38:24.918429   58299 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:24.918446   58299 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:24.918718   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:38:24.937284   58299 network_create.go:77] Found existing network {name:ha-107957 subnet:0xc001c42ff0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 10:38:24.937324   58299 kic.go:121] calculated static IP "192.168.49.4" for the "ha-107957-m03" container
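
Rather than letting Docker assign an address, kic computes a deterministic one from the existing ha-107957 network (gateway 192.168.49.1), which is why the three nodes land on .2, .3 and .4 and keep those addresses across restarts. A sketch of that derivation under the assumption, suggested by the log, that node N simply gets host byte N+1; the exact arithmetic in kic.go may differ:

    package main

    import (
        "fmt"
        "net"
    )

    // staticIP returns the address for the nodeIndex-th node in subnet,
    // assuming nodes are numbered from 1 and the gateway holds .1.
    func staticIP(subnet *net.IPNet, nodeIndex int) net.IP {
        base := subnet.IP.To4()
        ip := make(net.IP, len(base))
        copy(ip, base)
        ip[3] = base[3] + byte(nodeIndex+1) // node 1 -> .2, m03 (node 3) -> .4
        return ip
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.49.0/24")
        fmt.Println(staticIP(cidr, 3)) // 192.168.49.4, matching the log
    }
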
	I0916 10:38:24.937431   58299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:38:24.955937   58299 cli_runner.go:164] Run: docker volume create ha-107957-m03 --label name.minikube.sigs.k8s.io=ha-107957-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:38:24.974526   58299 oci.go:103] Successfully created a docker volume ha-107957-m03
	I0916 10:38:24.974625   58299 cli_runner.go:164] Run: docker run --rm --name ha-107957-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957-m03 --entrypoint /usr/bin/test -v ha-107957-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:38:25.480664   58299 oci.go:107] Successfully prepared a docker volume ha-107957-m03
	I0916 10:38:25.480707   58299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:25.480730   58299 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:38:25.480804   58299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:38:29.894616   58299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-107957-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413762946s)
	I0916 10:38:29.894655   58299 kic.go:203] duration metric: took 4.413918091s to extract preloaded images to volume ...
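
The two docker run commands above are the volume seeding step: a throwaway container first verifies the ha-107957-m03 volume is writable, then a second one untars the v1.31.1/cri-o preload into it, so the node boots with every Kubernetes image already in place (the 4.4s here buys back minutes of image pulls later). The extraction call, reconstructed as a Go os/exec sketch (image digest elided for brevity, error handling minimal):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload mounts the tarball read-only and the node volume as
    // the target, then runs tar inside the kicbase image to unpack it.
    func extractPreload(tarball, volume, kicbase string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            kicbase,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload(
            "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4",
            "ha-107957-m03",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644") // digest elided
        fmt.Println(err)
    }
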
	W0916 10:38:29.894789   58299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:38:29.894879   58299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:38:29.944523   58299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-107957-m03 --name ha-107957-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-107957-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-107957-m03 --network ha-107957 --ip 192.168.49.4 --volume ha-107957-m03:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:38:30.226366   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Running}}
	I0916 10:38:30.246741   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:38:30.264758   58299 cli_runner.go:164] Run: docker exec ha-107957-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:38:30.307422   58299 oci.go:144] the created container "ha-107957-m03" has a running status.
	I0916 10:38:30.307452   58299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa...
	I0916 10:38:30.466012   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:38:30.466061   58299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:38:30.490382   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:38:30.509012   58299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:38:30.509040   58299 kic_runner.go:114] Args: [docker exec --privileged ha-107957-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:38:30.558367   58299 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:38:30.577094   58299 machine.go:93] provisionDockerMachine start ...
	I0916 10:38:30.577189   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:30.602673   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:30.602963   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:38:30.602979   58299 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:38:30.603835   58299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37776->127.0.0.1:32793: read: connection reset by peer
	I0916 10:38:33.737131   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m03
	
	I0916 10:38:33.737162   58299 ubuntu.go:169] provisioning hostname "ha-107957-m03"
	I0916 10:38:33.737217   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:33.754194   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:33.754364   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:38:33.754377   58299 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m03 && echo "ha-107957-m03" | sudo tee /etc/hostname
	I0916 10:38:33.900681   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m03
	
	I0916 10:38:33.900767   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:33.918561   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:33.918794   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:38:33.918823   58299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:38:34.049313   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
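
Everything from here on runs over SSH to the container's forwarded port 32793, using the key pair installed into /home/docker/.ssh/authorized_keys at 10:38:30 (the one handshake reset above is just the container's sshd not being up yet; the runner retries). A minimal sketch of that runner with golang.org/x/crypto/ssh; InsecureIgnoreHostKey mirrors the loopback-only setup here and should not be copied into anything that crosses a real network:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote executes one command over SSH and returns its combined output.
    func runRemote(addr, keyPath, command string) (string, error) {
        pem, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback-forwarded port only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(command)
        return string(out), err
    }

    func main() {
        out, err := runRemote("127.0.0.1:32793",
            "/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }
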
	I0916 10:38:34.049368   58299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:38:34.049395   58299 ubuntu.go:177] setting up certificates
	I0916 10:38:34.049408   58299 provision.go:84] configureAuth start
	I0916 10:38:34.049488   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:38:34.065664   58299 provision.go:143] copyHostCerts
	I0916 10:38:34.065709   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:38:34.065741   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:38:34.065754   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:38:34.065828   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:38:34.065923   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:38:34.065950   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:38:34.065960   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:38:34.065997   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:38:34.066054   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:38:34.066078   58299 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:38:34.066087   58299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:38:34.066122   58299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:38:34.066189   58299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m03 san=[127.0.0.1 192.168.49.4 ha-107957-m03 localhost minikube]
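
The san=[...] list above is the important part of this server cert: every hostname and IP that might be used to reach the machine's TLS endpoints has to appear as a Subject Alternative Name, or clients will reject the connection. A compressed crypto/x509 sketch of the signing step, assuming the CA pair and the new key's public half are already parsed (PEM loading and key generation omitted):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a CA-signed server cert whose SANs cover the
    // hostnames and IPs from the log's san=[...] list.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        pub *rsa.PublicKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-107957-m03"}},
            DNSNames:     []string{"ha-107957-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{
                net.ParseIP("127.0.0.1"),
                net.ParseIP("192.168.49.4"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, pub, caKey)
    }
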
	I0916 10:38:34.276571   58299 provision.go:177] copyRemoteCerts
	I0916 10:38:34.276624   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:38:34.276656   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.293215   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:34.386186   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:38:34.386268   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:38:34.409325   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:38:34.409403   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:38:34.432158   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:38:34.432213   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:38:34.454766   58299 provision.go:87] duration metric: took 405.337346ms to configureAuth
	I0916 10:38:34.454791   58299 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:38:34.455029   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:34.455144   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.471918   58299 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:34.472102   58299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:38:34.472121   58299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:38:34.694736   58299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:38:34.694764   58299 machine.go:96] duration metric: took 4.117643787s to provisionDockerMachine
	I0916 10:38:34.694775   58299 client.go:171] duration metric: took 9.776568912s to LocalClient.Create
	I0916 10:38:34.694792   58299 start.go:167] duration metric: took 9.77662729s to libmachine.API.Create "ha-107957"
	I0916 10:38:34.694799   58299 start.go:293] postStartSetup for "ha-107957-m03" (driver="docker")
	I0916 10:38:34.694811   58299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:38:34.694880   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:38:34.694929   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.712379   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:34.806418   58299 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:38:34.809963   58299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:38:34.809996   58299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:38:34.810004   58299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:38:34.810011   58299 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:38:34.810020   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:38:34.810074   58299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:38:34.810142   58299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:38:34.810151   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:38:34.810231   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:38:34.818424   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:38:34.842357   58299 start.go:296] duration metric: took 147.542838ms for postStartSetup
	I0916 10:38:34.842746   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:38:34.859806   58299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:38:34.860057   58299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:38:34.860097   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.876488   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:34.970095   58299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:38:34.974100   58299 start.go:128] duration metric: took 10.058431856s to createHost
	I0916 10:38:34.974126   58299 start.go:83] releasing machines lock for "ha-107957-m03", held for 10.058651431s
	I0916 10:38:34.974186   58299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:38:34.993465   58299 out.go:177] * Found network options:
	I0916 10:38:34.994925   58299 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 10:38:34.996440   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:38:34.996464   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:38:34.996485   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:38:34.996496   58299 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:38:34.996563   58299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:38:34.996595   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:34.996639   58299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:38:34.996708   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:38:35.015457   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:35.015686   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:38:35.245067   58299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:38:35.249634   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:38:35.267233   58299 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:38:35.267298   58299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:38:35.294721   58299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
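
Disabling the stock loopback/bridge/podman CNI configs is deliberate: CRI-O loads the lexically first file in /etc/cni/net.d, so anything left behind would shadow the CNI that minikube configures later. The same rename pass expressed in Go, with filepath.Glob standing in for the logged find ... -exec mv (a sketch, not minikube's actual code path):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // disableCNIConfs renames matching CNI config files so the container
    // runtime no longer loads them.
    func disableCNIConfs(patterns ...string) error {
        for _, p := range patterns {
            matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
            if err != nil {
                return err
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Println("disabled", m)
            }
        }
        return nil
    }

    func main() {
        _ = disableCNIConfs("*bridge*", "*podman*")
    }
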
	I0916 10:38:35.294744   58299 start.go:495] detecting cgroup driver to use...
	I0916 10:38:35.294776   58299 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:38:35.294817   58299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:38:35.308988   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:38:35.320707   58299 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:38:35.320756   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:38:35.334091   58299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:38:35.347248   58299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:38:35.423897   58299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:38:35.508610   58299 docker.go:233] disabling docker service ...
	I0916 10:38:35.508681   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:38:35.527435   58299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:38:35.539623   58299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:38:35.615361   58299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:38:35.705579   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:38:35.716556   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:38:35.732322   58299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:38:35.732390   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.742382   58299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:38:35.742444   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.752000   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.761540   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.770919   58299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:38:35.779702   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.789271   58299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:35.804581   58299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
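
The run of sed commands above is minikube editing /etc/crio/crio.conf.d/02-crio.conf in place: set the pause image, force cgroup_manager to "cgroupfs" (matching the cgroupfs driver detected on the host at 10:38:35.294), pin conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. The whole-line rewrites expressed with Go's regexp package over the file contents (a sketch; minikube itself shells out to sed as logged):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteLine replaces any whole line containing `<key> = ...` with repl,
    // the equivalent of: sed -i 's|^.*<key> = .*$|<repl>|'.
    func rewriteLine(conf, key, repl string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, repl)
    }

    func main() {
        conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
        conf = rewriteLine(conf, "pause_image", `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = rewriteLine(conf, "cgroup_manager", `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }
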
	I0916 10:38:35.814492   58299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:38:35.822427   58299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:38:35.830863   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:35.894987   58299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:38:35.994767   58299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:38:35.994837   58299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:38:35.998643   58299 start.go:563] Will wait 60s for crictl version
	I0916 10:38:35.998710   58299 ssh_runner.go:195] Run: which crictl
	I0916 10:38:36.002002   58299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:38:36.033661   58299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:38:36.033739   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:38:36.066846   58299 ssh_runner.go:195] Run: crio --version
	I0916 10:38:36.103997   58299 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:38:36.105552   58299 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:38:36.107025   58299 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:38:36.108392   58299 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:38:36.124868   58299 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:38:36.128276   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:36.138504   58299 mustload.go:65] Loading cluster: ha-107957
	I0916 10:38:36.138756   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:36.139027   58299 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:38:36.156422   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:38:36.156692   58299 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.4
	I0916 10:38:36.156706   58299 certs.go:194] generating shared ca certs ...
	I0916 10:38:36.156718   58299 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:36.156856   58299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:38:36.156919   58299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:38:36.156933   58299 certs.go:256] generating profile certs ...
	I0916 10:38:36.157042   58299 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:38:36.157079   58299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518
	I0916 10:38:36.157099   58299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.d4dae518 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 10:38:36.471351   58299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.d4dae518 ...
	I0916 10:38:36.471379   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.d4dae518: {Name:mk86ec6e4db4e3ee25dab34a66ccccc54b2fa772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:36.471548   58299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518 ...
	I0916 10:38:36.471560   58299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518: {Name:mk7f635af130dc443af1fb5996a9a27aeb6677f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:36.471631   58299 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.d4dae518 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
	I0916 10:38:36.471764   58299 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key
	I0916 10:38:36.471890   58299 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:38:36.471905   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:38:36.471918   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:38:36.471928   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:38:36.471985   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:38:36.472003   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:38:36.472013   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:38:36.472022   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:38:36.472031   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:38:36.472077   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:38:36.472106   58299 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:38:36.472120   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:38:36.472152   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:38:36.472181   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:38:36.472218   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:38:36.472272   58299 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:38:36.472312   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:36.472334   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:38:36.472353   58299 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:38:36.472413   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:38:36.489765   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:38:36.589733   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:38:36.593224   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:38:36.604469   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:38:36.607605   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:38:36.618797   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:38:36.621830   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:38:36.633365   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:38:36.636572   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:38:36.647677   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:38:36.650797   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:38:36.663187   58299 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:38:36.666395   58299 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:38:36.678179   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:38:36.703649   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:38:36.729425   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:38:36.757180   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:38:36.782176   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 10:38:36.804453   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:38:36.826154   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:38:36.849179   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:38:36.872612   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:38:36.895194   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:38:36.917519   58299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:38:36.940358   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:38:36.956876   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:38:36.973133   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:38:36.989169   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:38:37.005196   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:38:37.021420   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:38:37.036917   58299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:38:37.052659   58299 ssh_runner.go:195] Run: openssl version
	I0916 10:38:37.057616   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:38:37.065915   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:38:37.068939   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:38:37.068983   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:38:37.075024   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:38:37.083561   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:38:37.091935   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:37.095084   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:37.095131   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:37.101373   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:38:37.109566   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:38:37.118196   58299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:38:37.121300   58299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:38:37.121381   58299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:38:37.127557   58299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:38:37.136439   58299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:38:37.139455   58299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:38:37.139509   58299 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.1 crio true true} ...
	I0916 10:38:37.139614   58299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:38:37.139646   58299 kube-vip.go:115] generating kube-vip config ...
	I0916 10:38:37.139685   58299 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:38:37.151097   58299 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:38:37.151174   58299 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
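
This manifest is written to /etc/kubernetes/manifests, so the kubelet on each control-plane node runs kube-vip as a static pod; the instances then lease-elect a holder for the VIP 192.168.49.254 (5s lease, 3s renew deadline, 1s retry, per the env block). The earlier lsmod failure just means ARP-based VIP failover without IPVS control-plane load-balancing. A trimmed text/template illustration of rendering such a manifest; the template here is hypothetical and far shorter than minikube's real one:

    package main

    import (
        "os"
        "text/template"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - name: address
          value: {{.VIP}}
        - name: port
          value: "{{.Port}}"
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifest))
        _ = t.Execute(os.Stdout, struct {
            VIP  string
            Port int
        }{VIP: "192.168.49.254", Port: 8443})
    }
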
	I0916 10:38:37.151227   58299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:38:37.159168   58299 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:38:37.159225   58299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:38:37.167003   58299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:38:37.182637   58299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:38:37.199093   58299 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:38:37.215506   58299 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:38:37.219086   58299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:37.229046   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:37.307091   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:38:37.319732   58299 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:38:37.320004   58299 start.go:317] joinCluster: &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:38:37.320166   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:38:37.320215   58299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:38:37.338002   58299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:38:37.478150   58299 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:37.478202   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mokpad.4jldtvkjjjsar6qe --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-107957-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 10:38:41.717081   58299 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mokpad.4jldtvkjjjsar6qe --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-107957-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (4.238853787s)
	I0916 10:38:41.717123   58299 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:38:42.603720   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-107957-m03 minikube.k8s.io/updated_at=2024_09_16T10_38_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-107957 minikube.k8s.io/primary=false
	I0916 10:38:42.697412   58299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-107957-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:38:42.789833   58299 start.go:319] duration metric: took 5.469823264s to joinCluster
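
The two kubectl invocations after the join are the bookkeeping that makes m03 a full peer: the first stamps minikube's metadata labels, the second strips node-role.kubernetes.io/control-plane:NoSchedule because every node in this profile is both control plane and worker (ControlPlane:true Worker:true). For comparison, the taint removal written against client-go rather than the kubectl binary (a sketch assuming an already-authenticated clientset):

    package nodeutil

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // removeControlPlaneTaint drops the NoSchedule control-plane taint,
    // the client-go analogue of `kubectl taint nodes <n> node-role.kubernetes.io/control-plane:NoSchedule-`.
    func removeControlPlaneTaint(ctx context.Context,
        cs *kubernetes.Clientset, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        kept := node.Spec.Taints[:0]
        for _, t := range node.Spec.Taints {
            if t.Key == "node-role.kubernetes.io/control-plane" &&
                t.Effect == corev1.TaintEffectNoSchedule {
                continue // drop it
            }
            kept = append(kept, t)
        }
        node.Spec.Taints = kept
        _, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
        return err
    }
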
	I0916 10:38:42.789915   58299 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:42.790199   58299 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:42.791936   58299 out.go:177] * Verifying Kubernetes components...
	I0916 10:38:42.793445   58299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:43.204937   58299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:38:43.222289   58299 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:38:43.222631   58299 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:38:43.222711   58299 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:38:43.222966   58299 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m03" to be "Ready" ...
	I0916 10:38:43.223047   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:43.223056   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:43.223067   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:43.223075   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:43.226173   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:43.724068   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:43.724092   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:43.724100   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:43.724104   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:43.726808   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:44.223643   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:44.223663   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:44.223671   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:44.223675   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:44.226814   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:44.723990   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:44.724010   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:44.724019   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:44.724024   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:44.726833   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:45.223744   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:45.223765   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:45.223775   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:45.223780   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:45.226384   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:45.226829   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
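
What follows is node_ready.go polling GET /api/v1/nodes/ha-107957-m03 roughly every 500ms, logging a status line whenever the Ready condition is still False, with a 6m budget. The same wait expressed with client-go and apimachinery's wait helpers (a sketch; PollUntilContextTimeout requires a reasonably recent apimachinery):

    package ready

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady blocks until the node reports Ready=True or timeout expires.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset,
        name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API errors as transient; keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
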
	I0916 10:38:45.723144   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:45.723164   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:45.723172   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:45.723177   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:45.725902   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:46.223799   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:46.223817   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:46.223826   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:46.223830   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:46.226521   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:46.723387   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:46.723412   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:46.723424   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:46.723429   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:46.726256   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:47.223145   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:47.223163   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:47.223173   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:47.223180   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:47.225957   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:47.723714   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:47.723744   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:47.723801   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:47.723810   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:47.726372   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:47.726834   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:48.223868   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:48.223890   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:48.223899   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:48.223905   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:48.226363   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:48.723209   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:48.723232   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:48.723240   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:48.723244   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:48.726007   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:49.223841   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:49.223860   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:49.223867   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:49.223873   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:49.226386   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:49.723552   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:49.723576   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:49.723584   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:49.723588   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:49.726465   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:49.728853   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:50.223252   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:50.223279   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:50.223287   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:50.223291   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:50.226029   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:50.723919   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:50.723941   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:50.723951   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:50.723958   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:50.726487   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:51.223373   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:51.223392   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:51.223400   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:51.223404   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:51.226038   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:51.723491   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:51.723516   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:51.723526   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:51.723530   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:51.726404   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:52.223302   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:52.223322   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:52.223330   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:52.223333   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:52.225843   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:52.226264   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:52.723624   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:52.723644   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:52.723652   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:52.723657   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:52.726430   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:53.223234   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:53.223253   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:53.223260   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:53.223265   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:53.225920   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:53.723831   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:53.723851   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:53.723860   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:53.723863   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:53.726703   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:54.224088   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:54.224107   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:54.224115   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:54.224118   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:54.226780   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:54.227361   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:54.723999   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:54.724019   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:54.724027   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:54.724036   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:54.728305   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:55.223232   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:55.223257   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:55.223265   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:55.223269   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:55.225822   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:55.724026   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:55.724049   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:55.724057   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:55.724062   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:55.727072   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:56.223877   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:56.223898   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:56.223912   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:56.223916   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:56.226509   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:56.723365   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:56.723387   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:56.723395   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:56.723399   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:56.726446   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:38:56.727019   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:57.223210   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:57.223228   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:57.223237   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:57.223242   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:57.225607   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:57.723489   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:57.723509   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:57.723518   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:57.723522   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:57.726165   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:58.224115   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:58.224142   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:58.224153   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.224157   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.226597   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:58.723496   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:58.723515   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:58.723523   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.723530   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.726205   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:59.224032   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:59.224052   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:59.224060   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:59.224064   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:59.226708   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:38:59.227328   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:38:59.723865   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:38:59.723887   58299 round_trippers.go:469] Request Headers:
	I0916 10:38:59.723895   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:59.723898   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:59.726433   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:00.223224   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:00.223245   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:00.223255   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:00.223260   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:00.225857   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:00.723626   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:00.723652   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:00.723661   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:00.723666   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:00.726165   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:01.223617   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:01.223643   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:01.223654   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:01.223661   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:01.226347   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:01.724239   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:01.724263   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:01.724273   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:01.724280   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:01.727378   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:01.727905   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:02.223152   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:02.223173   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:02.223181   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:02.223184   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:02.225887   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:02.723842   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:02.723872   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:02.723881   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:02.723886   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:02.726580   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:03.223540   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:03.223560   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:03.223568   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:03.223573   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:03.226137   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:03.724092   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:03.724115   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:03.724123   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:03.724130   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:03.726966   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:04.224077   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:04.224112   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:04.224130   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:04.224135   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:04.226790   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:04.227365   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:04.723906   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:04.723926   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:04.723934   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:04.723939   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:04.726497   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:05.223270   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:05.223294   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:05.223304   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:05.223311   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:05.225812   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:05.723595   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:05.723618   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:05.723626   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:05.723630   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:05.726445   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:06.223346   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:06.223372   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:06.223380   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:06.223384   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:06.225895   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:06.723788   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:06.723807   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:06.723815   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:06.723820   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:06.726545   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:06.727056   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:07.223292   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:07.223311   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:07.223319   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:07.223323   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:07.225982   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:07.723896   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:07.723924   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:07.723936   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:07.723943   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:07.726547   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:08.224121   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:08.224143   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:08.224150   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:08.224153   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:08.226570   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:08.723223   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:08.723243   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:08.723252   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:08.723255   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:08.726077   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:09.223947   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:09.223972   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:09.223980   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:09.223987   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:09.226816   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:09.227340   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:09.723981   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:09.724002   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:09.724010   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:09.724013   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:09.726901   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:10.223381   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:10.223401   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:10.223409   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:10.223413   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:10.226251   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:10.723998   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:10.724022   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:10.724031   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:10.724039   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:10.726605   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:11.223184   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:11.223203   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:11.223211   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:11.223221   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:11.225824   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:11.723612   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:11.723632   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:11.723640   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:11.723648   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:11.726455   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:11.726948   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:12.223255   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:12.223278   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:12.223287   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:12.223291   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:12.226201   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:12.724060   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:12.724079   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:12.724087   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:12.724090   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:12.726725   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:13.223507   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:13.223531   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:13.223542   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:13.223548   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:13.226403   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:13.723243   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:13.723264   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:13.723271   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:13.723275   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:13.726009   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:14.223889   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:14.223918   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:14.223928   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:14.223932   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:14.226364   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:14.226853   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:14.723299   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:14.723322   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:14.723330   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:14.723334   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:14.725924   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:15.223800   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:15.223821   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:15.223829   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:15.223834   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:15.226539   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:15.723428   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:15.723447   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:15.723455   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:15.723460   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:15.726258   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:16.224161   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:16.224182   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:16.224192   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:16.224197   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:16.227001   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:16.227548   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:16.723969   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:16.723991   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:16.723999   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:16.724004   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:16.726839   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:17.223345   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:17.223365   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:17.223373   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:17.223377   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:17.226010   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:17.723698   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:17.723718   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:17.723726   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:17.723733   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:17.726410   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:18.223220   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:18.223239   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:18.223246   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:18.223249   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:18.225859   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:18.723764   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:18.723789   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:18.723797   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:18.723802   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:18.726495   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:18.727013   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:19.223301   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:19.223322   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:19.223329   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:19.223333   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:19.226117   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:19.724078   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:19.724099   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:19.724107   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:19.724115   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:19.726978   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:20.223847   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:20.223875   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:20.223883   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:20.223887   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:20.226871   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:20.723656   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:20.723676   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:20.723684   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:20.723688   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:20.726302   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:21.224203   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:21.224224   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:21.224232   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:21.224240   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:21.226516   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:21.227012   58299 node_ready.go:53] node "ha-107957-m03" has status "Ready":"False"
	I0916 10:39:21.723971   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:21.723991   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:21.723998   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:21.724002   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:21.726538   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:22.223405   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:22.223425   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:22.223433   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:22.223437   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:22.226024   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:22.723921   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:22.723941   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:22.723949   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:22.723953   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:22.726717   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.223546   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:23.223569   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.223581   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.223587   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.226207   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.226800   58299 node_ready.go:49] node "ha-107957-m03" has status "Ready":"True"
	I0916 10:39:23.226836   58299 node_ready.go:38] duration metric: took 40.003852399s for node "ha-107957-m03" to be "Ready" ...
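The 40 seconds of GETs above are the node_ready poll: one request to /api/v1/nodes/ha-107957-m03 every 500ms until the Node's Ready condition flips from "False" to "True", bounded by the 6m0s timeout announced at the start. A minimal client-go sketch of the same loop, assuming cs is an already-built kubernetes.Interface; the helper names are illustrative, and minikube's own node_ready.go differs in detail:

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // isNodeReady reports whether the NodeReady condition is True, which is
    // what flips the log above from Ready:"False" to Ready:"True".
    func isNodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitNodeReady issues GET /api/v1/nodes/<name> every 500ms, matching
    // the cadence of the requests above, until Ready or the timeout.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // tolerate transient API errors; keep polling
    			}
    			return isNodeReady(n), nil
    		})
    }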
	I0916 10:39:23.226850   58299 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:39:23.226951   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:23.226964   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.226974   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.226979   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.232388   58299 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:39:23.240725   58299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.240829   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:39:23.240842   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.240852   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.240862   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.243164   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.243724   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:23.243741   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.243749   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.243754   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.246015   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.246557   58299 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.246578   58299 pod_ready.go:82] duration metric: took 5.825605ms for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.246588   58299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.246665   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:39:23.246675   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.246684   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.246692   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.248794   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.249458   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:23.249473   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.249480   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.249483   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.251434   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:39:23.251951   58299 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.251972   58299 pod_ready.go:82] duration metric: took 5.374978ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.251984   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.252052   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:39:23.252063   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.252073   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.252080   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.254302   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.254805   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:23.254820   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.254828   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.254833   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.256635   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:39:23.257058   58299 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.257076   58299 pod_ready.go:82] duration metric: took 5.085871ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.257085   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.257136   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:39:23.257144   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.257150   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.257155   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.258999   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:39:23.259516   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:23.259533   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.259540   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.259544   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.261302   58299 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:39:23.261746   58299 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.261761   58299 pod_ready.go:82] duration metric: took 4.671567ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.261771   58299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.424168   58299 request.go:632] Waited for 162.31858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:39:23.424228   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:39:23.424234   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.424241   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.424246   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.426762   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
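The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local token-bucket rate limiter, not from server-side API Priority and Fairness: the rest.Config dumped at the top of this log has QPS:0, Burst:0, which fall back to client-go's defaults of 5 requests/s with a burst of 10, and the back-to-back pod-plus-node GETs of this pod_ready phase exceed that. A sketch of raising the limits when building the clientset; the function name and values are illustrative:

    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // newFastClient builds a clientset whose client-side rate limiter will
    // not delay short bursts of GETs like the ones above. Leaving QPS/Burst
    // at 0 selects the 5/10 defaults that produced these throttling waits.
    func newFastClient(cfg *rest.Config) (kubernetes.Interface, error) {
    	cfg.QPS = 50
    	cfg.Burst = 100
    	return kubernetes.NewForConfig(cfg)
    }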
	I0916 10:39:23.623726   58299 request.go:632] Waited for 196.272148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:23.623803   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:23.623813   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.623820   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.623824   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.626384   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:23.626824   58299 pod_ready.go:93] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:23.626842   58299 pod_ready.go:82] duration metric: took 365.065423ms for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.626862   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:23.824106   58299 request.go:632] Waited for 197.175106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:39:23.824199   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:39:23.824219   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:23.824233   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:23.824242   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:23.827180   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.024129   58299 request.go:632] Waited for 196.356662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:24.024198   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:24.024203   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.024211   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.024216   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.026781   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.027345   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:24.027366   58299 pod_ready.go:82] duration metric: took 400.494229ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.027379   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.224363   58299 request.go:632] Waited for 196.890278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:39:24.224424   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:39:24.224430   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.224438   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.224443   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.227132   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.424123   58299 request.go:632] Waited for 196.366355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:24.424220   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:24.424230   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.424241   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.424247   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.426764   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.427305   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:24.427327   58299 pod_ready.go:82] duration metric: took 399.940426ms for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.427340   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.624598   58299 request.go:632] Waited for 197.17129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:39:24.624660   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:39:24.624665   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.624673   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.624679   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.627797   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:24.823610   58299 request.go:632] Waited for 195.133821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:24.823673   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:24.823682   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:24.823692   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:24.823698   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:24.826160   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:24.826579   58299 pod_ready.go:93] pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:24.826597   58299 pod_ready.go:82] duration metric: took 399.250784ms for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:24.826608   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.023552   58299 request.go:632] Waited for 196.87134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:39:25.023607   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:39:25.023612   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.023620   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.023623   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.026543   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:25.223563   58299 request.go:632] Waited for 196.285225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:25.223627   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:25.223632   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.223640   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.223646   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.226158   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:25.226630   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:25.226650   58299 pod_ready.go:82] duration metric: took 400.034095ms for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.226663   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.423665   58299 request.go:632] Waited for 196.9218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:39:25.423752   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:39:25.423764   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.423776   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.423782   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.426729   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:25.623695   58299 request.go:632] Waited for 196.27248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:25.623760   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:25.623770   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.623781   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.623791   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.626267   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:25.626854   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:25.626875   58299 pod_ready.go:82] duration metric: took 400.203437ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.626892   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:25.823948   58299 request.go:632] Waited for 196.960808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:39:25.824005   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:39:25.824012   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:25.824024   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:25.824034   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:25.826859   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.023769   58299 request.go:632] Waited for 196.268704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:26.023845   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:26.023852   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.023863   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.023871   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.026444   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.026923   58299 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:26.026942   58299 pod_ready.go:82] duration metric: took 400.04067ms for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.026953   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.224044   58299 request.go:632] Waited for 197.015321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:39:26.224111   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:39:26.224123   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.224134   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.224140   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.226759   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.423736   58299 request.go:632] Waited for 196.372075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:26.423822   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:26.423834   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.423843   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.423850   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.426445   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.426998   58299 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:26.427021   58299 pod_ready.go:82] duration metric: took 400.06143ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.427032   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.623931   58299 request.go:632] Waited for 196.824798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:39:26.623990   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:39:26.623997   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.624007   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.624015   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.626765   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.824526   58299 request.go:632] Waited for 197.199798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:26.824601   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:26.824612   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:26.824622   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:26.824628   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:26.827165   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:26.827603   58299 pod_ready.go:93] pod "kube-proxy-f2scr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:26.827622   58299 pod_ready.go:82] duration metric: took 400.581271ms for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:26.827631   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.023783   58299 request.go:632] Waited for 196.042254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:39:27.023838   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:39:27.023844   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.023851   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.023855   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.026409   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:27.224334   58299 request.go:632] Waited for 197.34357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:27.224394   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:27.224399   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.224406   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.224410   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.226960   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:27.227494   58299 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:27.227513   58299 pod_ready.go:82] duration metric: took 399.869121ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.227523   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.423574   58299 request.go:632] Waited for 195.971296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:39:27.423665   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:39:27.423696   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.423705   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.423711   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.426388   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:27.624349   58299 request.go:632] Waited for 197.361421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:27.624417   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:39:27.624425   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.624433   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.624436   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.627058   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:27.627584   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:27.627606   58299 pod_ready.go:82] duration metric: took 400.075507ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.627620   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:27.823640   58299 request.go:632] Waited for 195.928112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:39:27.823734   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:39:27.823741   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:27.823751   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:27.823760   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:27.826521   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:28.024401   58299 request.go:632] Waited for 197.354153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:28.024474   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:39:28.024479   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.024487   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.024490   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.027115   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:28.027598   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:28.027622   58299 pod_ready.go:82] duration metric: took 399.991808ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:28.027634   58299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:28.224610   58299 request.go:632] Waited for 196.899296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:39:28.224698   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:39:28.224706   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.224717   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.224727   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.227365   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:28.424322   58299 request.go:632] Waited for 196.362994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:28.424391   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:39:28.424399   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.424409   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.424416   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.427305   58299 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:28.427942   58299 pod_ready.go:93] pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:39:28.427968   58299 pod_ready.go:82] duration metric: took 400.324894ms for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:39:28.427984   58299 pod_ready.go:39] duration metric: took 5.201116236s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
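
The 5.2s recorded here is dominated by client-side throttling: each "Waited for ~196ms ... due to client-side throttling" line above is client-go's token-bucket rate limiter, which at the default 5 QPS releases one request every 200ms once the burst allowance is spent, so each per-pod check (one GET for the pod, one for its node) costs roughly 400ms. A minimal standard-library sketch of that pacing; the qps value is an assumption matching client-go's default, not something read from this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const qps = 5                 // client-go's default rest.Config QPS
	interval := time.Second / qps // one token every 200ms
	tick := time.NewTicker(interval)
	defer tick.Stop()

	start := time.Now()
	for i := 1; i <= 5; i++ {
		<-tick.C // block until the limiter releases the next token
		fmt.Printf("request %d issued after %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}

Raising QPS and Burst on the client's rest.Config would remove these waits, at the cost of more load on the apiserver.
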
	I0916 10:39:28.428018   58299 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:39:28.428111   58299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:39:28.440259   58299 api_server.go:72] duration metric: took 45.650305903s to wait for apiserver process to appear ...
	I0916 10:39:28.440286   58299 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:39:28.440319   58299 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:39:28.445420   58299 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
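
The healthz probe above reduces to one HTTPS GET that is considered healthy on a 200 response with body "ok". A minimal Go sketch under two stated assumptions: the endpoint accepts the request without client certificates (minikube normally authenticates with the cluster's TLS credentials), and InsecureSkipVerify is used purely to keep the example short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip certificate verification only for illustration; a real client
	// should trust the cluster CA instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect: 200 ok
}
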
	I0916 10:39:28.445496   58299 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:39:28.445503   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.445511   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.445517   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.446266   58299 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:39:28.446319   58299 api_server.go:141] control plane version: v1.31.1
	I0916 10:39:28.446336   58299 api_server.go:131] duration metric: took 6.043324ms to wait for apiserver health ...
	I0916 10:39:28.446345   58299 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:39:28.623701   58299 request.go:632] Waited for 177.249352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:28.623756   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:28.623761   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.623769   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.623774   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.628677   58299 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:28.635104   58299 system_pods.go:59] 24 kube-system pods found
	I0916 10:39:28.635147   58299 system_pods.go:61] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:39:28.635153   58299 system_pods.go:61] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:39:28.635156   58299 system_pods.go:61] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:39:28.635160   58299 system_pods.go:61] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:39:28.635168   58299 system_pods.go:61] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:39:28.635172   58299 system_pods.go:61] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:39:28.635175   58299 system_pods.go:61] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:39:28.635179   58299 system_pods.go:61] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:39:28.635183   58299 system_pods.go:61] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:39:28.635187   58299 system_pods.go:61] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:39:28.635192   58299 system_pods.go:61] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:39:28.635197   58299 system_pods.go:61] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:39:28.635203   58299 system_pods.go:61] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:39:28.635206   58299 system_pods.go:61] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:39:28.635209   58299 system_pods.go:61] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:39:28.635212   58299 system_pods.go:61] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:39:28.635215   58299 system_pods.go:61] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:39:28.635221   58299 system_pods.go:61] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:39:28.635226   58299 system_pods.go:61] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:39:28.635229   58299 system_pods.go:61] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:39:28.635234   58299 system_pods.go:61] "kube-vip-ha-107957" [f6ff7681-062a-4c0b-a621-4b5c3079ee99] Running
	I0916 10:39:28.635237   58299 system_pods.go:61] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:39:28.635242   58299 system_pods.go:61] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:39:28.635246   58299 system_pods.go:61] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:39:28.635252   58299 system_pods.go:74] duration metric: took 188.899196ms to wait for pod list to return data ...
	I0916 10:39:28.635261   58299 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:39:28.823594   58299 request.go:632] Waited for 188.251858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:39:28.823646   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:39:28.823651   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:28.823658   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:28.823662   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:28.826719   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:28.826847   58299 default_sa.go:45] found service account: "default"
	I0916 10:39:28.826861   58299 default_sa.go:55] duration metric: took 191.593552ms for default service account to be created ...
	I0916 10:39:28.826868   58299 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:39:29.024332   58299 request.go:632] Waited for 197.387174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:29.024398   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:39:29.024406   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:29.024416   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:29.024430   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:29.029545   58299 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:39:29.035853   58299 system_pods.go:86] 24 kube-system pods found
	I0916 10:39:29.035881   58299 system_pods.go:89] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:39:29.035888   58299 system_pods.go:89] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:39:29.035892   58299 system_pods.go:89] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:39:29.035896   58299 system_pods.go:89] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:39:29.035901   58299 system_pods.go:89] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:39:29.035905   58299 system_pods.go:89] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:39:29.035910   58299 system_pods.go:89] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:39:29.035914   58299 system_pods.go:89] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:39:29.035918   58299 system_pods.go:89] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:39:29.035922   58299 system_pods.go:89] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:39:29.035925   58299 system_pods.go:89] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:39:29.035929   58299 system_pods.go:89] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:39:29.035933   58299 system_pods.go:89] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:39:29.035937   58299 system_pods.go:89] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:39:29.035941   58299 system_pods.go:89] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:39:29.035944   58299 system_pods.go:89] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:39:29.035948   58299 system_pods.go:89] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:39:29.035951   58299 system_pods.go:89] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:39:29.035954   58299 system_pods.go:89] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:39:29.035958   58299 system_pods.go:89] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:39:29.035961   58299 system_pods.go:89] "kube-vip-ha-107957" [f6ff7681-062a-4c0b-a621-4b5c3079ee99] Running
	I0916 10:39:29.035966   58299 system_pods.go:89] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:39:29.035969   58299 system_pods.go:89] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:39:29.035972   58299 system_pods.go:89] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:39:29.035979   58299 system_pods.go:126] duration metric: took 209.105667ms to wait for k8s-apps to be running ...
	I0916 10:39:29.035996   58299 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:39:29.036044   58299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:39:29.046827   58299 system_svc.go:56] duration metric: took 10.82024ms WaitForService to wait for kubelet
	I0916 10:39:29.046857   58299 kubeadm.go:582] duration metric: took 46.256910268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:39:29.046891   58299 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:39:29.224236   58299 request.go:632] Waited for 177.251294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:39:29.224304   58299 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:39:29.224314   58299 round_trippers.go:469] Request Headers:
	I0916 10:39:29.224323   58299 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:29.224332   58299 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:29.227796   58299 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:29.228723   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:39:29.228764   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:39:29.228795   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:39:29.228801   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:39:29.228807   58299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:39:29.228813   58299 node_conditions.go:123] node cpu capacity is 8
	I0916 10:39:29.228822   58299 node_conditions.go:105] duration metric: took 181.924487ms to run NodePressure ...
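
The NodePressure pass above lists every node, records its capacity, and verifies that none of the kubelet pressure conditions are True. A client-go sketch of the same idea (it assumes a reachable kubeconfig at the default location and mirrors the check's intent, not minikube's exact code):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Same fields the log reports: ephemeral storage and CPU capacity.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  %s is True: %s\n", c.Type, c.Message)
				}
			}
		}
	}
}
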
	I0916 10:39:29.228842   58299 start.go:241] waiting for startup goroutines ...
	I0916 10:39:29.228872   58299 start.go:255] writing updated cluster config ...
	I0916 10:39:29.229288   58299 ssh_runner.go:195] Run: rm -f paused
	I0916 10:39:29.236462   58299 out.go:177] * Done! kubectl is now configured to use "ha-107957" cluster and "default" namespace by default
	E0916 10:39:29.237717   58299 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
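
The closing error is worth unpacking: "fork/exec /usr/local/bin/kubectl: exec format error" comes from the kernel refusing to execute the file, which almost always means the kubectl binary on the PATH was built for a different architecture than this linux/amd64 host, or is not a binary at all (for example a truncated or HTML-wrapped download). A hypothetical Go diagnostic, not part of minikube or this suite, that inspects the ELF header to confirm:

package main

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
)

func main() {
	path := "/usr/local/bin/kubectl"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := elf.Open(path)
	if err != nil {
		// Not parseable as ELF at all: a foreign format (Mach-O, PE),
		// a script without a shebang, or a corrupted download.
		log.Fatalf("%s is not a valid ELF binary: %v", path, err)
	}
	defer f.Close()
	fmt.Printf("class=%v machine=%v type=%v\n", f.Class, f.Machine, f.Type)
	if f.Machine != elf.EM_X86_64 {
		fmt.Println("architecture mismatch: this linux/amd64 host cannot execute", f.Machine)
	}
}

On a working host this prints machine=EM_X86_64; any other result (or a parse failure) reproduces the exec failure seen above.
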
	
	
	==> CRI-O <==
	Sep 16 10:37:36 ha-107957 crio[1034]: time="2024-09-16 10:37:36.244431248Z" level=info msg="Created container 2812c05cbb819fba02026f853f56bf72103333b063d4ca9d8556a1a9ba9ea62a: kube-system/coredns-7c65d6cfc9-mhp28/coredns" id=1bdfb496-6253-422a-af30-dc700b4b48bd name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:37:36 ha-107957 crio[1034]: time="2024-09-16 10:37:36.244970827Z" level=info msg="Starting container: 2812c05cbb819fba02026f853f56bf72103333b063d4ca9d8556a1a9ba9ea62a" id=b7193082-2d26-4f45-9b05-5daccd29ccd3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:37:36 ha-107957 crio[1034]: time="2024-09-16 10:37:36.299215725Z" level=info msg="Started container" PID=2390 containerID=2812c05cbb819fba02026f853f56bf72103333b063d4ca9d8556a1a9ba9ea62a description=kube-system/coredns-7c65d6cfc9-mhp28/coredns id=b7193082-2d26-4f45-9b05-5daccd29ccd3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7174b9a3e70964062e8b18263b30732ccbb5b458d5b4b2a807bbda9cdd79b329
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.433727831Z" level=info msg="Running pod sandbox: default/busybox-7dff88458-m2jh6/POD" id=5e156903-b707-458c-ad93-55d4d43a105f name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.433820948Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.449846263Z" level=info msg="Got pod network &{Name:busybox-7dff88458-m2jh6 Namespace:default ID:710de54c88a1ba1855da0ef0724e031f59bef7ed77aea4ca7f5b6eb012824843 UID:a43b7850-fcaa-4ca6-a5d0-c04bf031e2e8 NetNS:/var/run/netns/d062af54-eab5-468c-8a07-0a5ecd9b1c93 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.449882965Z" level=info msg="Adding pod default_busybox-7dff88458-m2jh6 to CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.462648174Z" level=info msg="Got pod network &{Name:busybox-7dff88458-m2jh6 Namespace:default ID:710de54c88a1ba1855da0ef0724e031f59bef7ed77aea4ca7f5b6eb012824843 UID:a43b7850-fcaa-4ca6-a5d0-c04bf031e2e8 NetNS:/var/run/netns/d062af54-eab5-468c-8a07-0a5ecd9b1c93 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.462769789Z" level=info msg="Checking pod default_busybox-7dff88458-m2jh6 for CNI network kindnet (type=ptp)"
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.465905378Z" level=info msg="Ran pod sandbox 710de54c88a1ba1855da0ef0724e031f59bef7ed77aea4ca7f5b6eb012824843 with infra container: default/busybox-7dff88458-m2jh6/POD" id=5e156903-b707-458c-ad93-55d4d43a105f name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.467141808Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=937ca887-d235-42f7-a574-a480e24f85f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.467375671Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=937ca887-d235-42f7-a574-a480e24f85f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.468063188Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=aac48b61-89d2-456c-a77c-6ad2faaf9158 name=/runtime.v1.ImageService/PullImage
	Sep 16 10:39:30 ha-107957 crio[1034]: time="2024-09-16 10:39:30.469110073Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:39:31 ha-107957 crio[1034]: time="2024-09-16 10:39:31.299742855Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.599930304Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=aac48b61-89d2-456c-a77c-6ad2faaf9158 name=/runtime.v1.ImageService/PullImage
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.600660737Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=9d39e8d1-6625-49c6-8a4c-12b60b1f5501 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.601278863Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9d39e8d1-6625-49c6-8a4c-12b60b1f5501 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.602624202Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c60851b0-ce9f-4127-bbb0-fdaca731deaa name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.603317032Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c60851b0-ce9f-4127-bbb0-fdaca731deaa name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.604260689Z" level=info msg="Creating container: default/busybox-7dff88458-m2jh6/busybox" id=3284e0d5-cc54-445e-9942-8534f7174e52 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.604377206Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.667673390Z" level=info msg="Created container 861381147b229f211fe3711140a60ff3444297d9705cd5049aa5576eef625468: default/busybox-7dff88458-m2jh6/busybox" id=3284e0d5-cc54-445e-9942-8534f7174e52 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.668454910Z" level=info msg="Starting container: 861381147b229f211fe3711140a60ff3444297d9705cd5049aa5576eef625468" id=8a6d1b44-8038-4396-ba62-ff83c05cdf8e name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:39:33 ha-107957 crio[1034]: time="2024-09-16 10:39:33.674514851Z" level=info msg="Started container" PID=2641 containerID=861381147b229f211fe3711140a60ff3444297d9705cd5049aa5576eef625468 description=default/busybox-7dff88458-m2jh6/busybox id=8a6d1b44-8038-4396-ba62-ff83c05cdf8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=710de54c88a1ba1855da0ef0724e031f59bef7ed77aea4ca7f5b6eb012824843
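
The CRI-O excerpt shows a complete CRI image-pull handshake: ImageStatus finds gcr.io/k8s-minikube/busybox:1.28 missing, PullImage fetches it and resolves the digest, ImageStatus is re-checked, and only then is the container created and started. A sketch of the first two calls against the CRI socket, assuming the k8s.io/cri-api and google.golang.org/grpc modules and the socket path this node advertises:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same endpoint as the node annotation: unix:///var/run/crio/crio.sock.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
	client := runtimeapi.NewImageServiceClient(conn)

	// Step 1: ImageStatus; a nil Image in the response means "not found".
	status, err := client.ImageStatus(context.Background(),
		&runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		log.Fatal(err)
	}
	if status.Image == nil {
		// Step 2: PullImage returns the resolved digest reference.
		pulled, err := client.PullImage(context.Background(),
			&runtimeapi.PullImageRequest{Image: img})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pulled:", pulled.ImageRef)
	} else {
		fmt.Println("already present:", status.Image.Id)
	}
}
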
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	861381147b229       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   0                   710de54c88a1b       busybox-7dff88458-m2jh6
	2812c05cbb819       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   0                   7174b9a3e7096       coredns-7c65d6cfc9-mhp28
	6d2579e1933da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Running             storage-provisioner       0                   4f87c81927aed       storage-provisioner
	e70b0d4efee19       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   0                   4993c49192681       coredns-7c65d6cfc9-t9xdr
	961b9339405b0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago        Running             kube-proxy                0                   e9b91b2749be8       kube-proxy-5ctr8
	70b5c5b4e1dc3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago        Running             kindnet-cni               0                   b4bf04ff45396       kindnet-rwcs2
	77ff8efc10fe1       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     3 minutes ago        Running             kube-vip                  0                   25ff40ebef580       kube-vip-ha-107957
	5962366f88b6f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago        Running             kube-scheduler            0                   dcd27af89531d       kube-scheduler-ha-107957
	b1d6cc64c9b2c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago        Running             kube-apiserver            0                   1adf66d5a6d51       kube-apiserver-ha-107957
	7e57abaf77dbc       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago        Running             kube-controller-manager   0                   774dc2301fff2       kube-controller-manager-ha-107957
	2481bf9216b4b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago        Running             etcd                      0                   194127e61d89d       etcd-ha-107957
	
	
	==> coredns [2812c05cbb819fba02026f853f56bf72103333b063d4ca9d8556a1a9ba9ea62a] <==
	[INFO] 10.244.2.2:46793 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004296567s
	[INFO] 10.244.2.2:43063 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153947s
	[INFO] 10.244.2.2:46086 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0169607s
	[INFO] 10.244.2.2:54094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144006s
	[INFO] 10.244.0.4:44197 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122548s
	[INFO] 10.244.0.4:51311 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002080567s
	[INFO] 10.244.0.4:43617 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078863s
	[INFO] 10.244.1.2:53583 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153821s
	[INFO] 10.244.1.2:42615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001661333s
	[INFO] 10.244.1.2:39797 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086687s
	[INFO] 10.244.1.2:54605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151286s
	[INFO] 10.244.2.2:43370 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00023735s
	[INFO] 10.244.2.2:41422 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100456s
	[INFO] 10.244.2.2:39218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108926s
	[INFO] 10.244.0.4:60314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082915s
	[INFO] 10.244.1.2:41042 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137109s
	[INFO] 10.244.1.2:48817 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116903s
	[INFO] 10.244.1.2:45958 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088746s
	[INFO] 10.244.2.2:54916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157262s
	[INFO] 10.244.2.2:42021 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148398s
	[INFO] 10.244.2.2:48014 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123643s
	[INFO] 10.244.2.2:38833 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108016s
	[INFO] 10.244.0.4:41677 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128554s
	[INFO] 10.244.0.4:54618 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075484s
	[INFO] 10.244.1.2:42614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000081244s
	
	
	==> coredns [e70b0d4efee19ff2bd834f86c91dd591952f5e8561c4f155b13c60ed04c3210a] <==
	[INFO] 10.244.2.2:37520 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010271283s
	[INFO] 10.244.0.4:34086 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118588s
	[INFO] 10.244.0.4:50400 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000087455s
	[INFO] 10.244.2.2:33492 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158642s
	[INFO] 10.244.2.2:40447 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163329s
	[INFO] 10.244.2.2:55339 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136729s
	[INFO] 10.244.0.4:34312 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164115s
	[INFO] 10.244.0.4:44393 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106766s
	[INFO] 10.244.0.4:54524 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001587719s
	[INFO] 10.244.0.4:37539 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078422s
	[INFO] 10.244.0.4:55884 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090481s
	[INFO] 10.244.1.2:56325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001976108s
	[INFO] 10.244.1.2:35999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097425s
	[INFO] 10.244.1.2:58242 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085904s
	[INFO] 10.244.1.2:39966 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080925s
	[INFO] 10.244.2.2:44398 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014975s
	[INFO] 10.244.0.4:57559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149084s
	[INFO] 10.244.0.4:37522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057221s
	[INFO] 10.244.0.4:32815 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145244s
	[INFO] 10.244.1.2:43015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152009s
	[INFO] 10.244.0.4:33260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000146393s
	[INFO] 10.244.0.4:45907 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011862s
	[INFO] 10.244.1.2:41436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136888s
	[INFO] 10.244.1.2:56800 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131119s
	[INFO] 10.244.1.2:48525 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098212s
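
A pattern worth noting in both coredns logs: queries for kubernetes.default first produce NXDOMAIN for expansions such as kubernetes.default.default.svc.cluster.local before the fully qualified kubernetes.default.svc.cluster.local answers NOERROR. That is the pod resolver walking its search path (the ndots behavior from the pod's /etc/resolv.conf), not a fault. A tiny Go sketch of the lookup that finally succeeds; it assumes it runs inside the cluster, since the name is cluster-internal:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Fully qualified service name, so no search-path expansion is needed.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed (expected when run outside the cluster):", err)
		return
	}
	fmt.Println("kubernetes service addresses:", addrs)
}
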
	
	
	==> describe nodes <==
	Name:               ha-107957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_37_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:37:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:40:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:39:52 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:39:52 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:39:52 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:39:52 +0000   Mon, 16 Sep 2024 10:37:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-107957
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 82180a11932f4b1fb524fbc706471f86
	  System UUID:                4b3cbb31-41b2-4aeb-852f-1a17b0b6a69f
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m2jh6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7c65d6cfc9-mhp28             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m39s
	  kube-system                 coredns-7c65d6cfc9-t9xdr             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m39s
	  kube-system                 etcd-ha-107957                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m44s
	  kube-system                 kindnet-rwcs2                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m39s
	  kube-system                 kube-apiserver-ha-107957             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-ha-107957    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-5ctr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-scheduler-ha-107957             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-vip-ha-107957                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 3m38s  kube-proxy       
	  Normal   Starting                 3m44s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m44s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m44s  kubelet          Node ha-107957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m44s  kubelet          Node ha-107957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m44s  kubelet          Node ha-107957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m40s  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   NodeReady                3m28s  kubelet          Node ha-107957 status is now: NodeReady
	  Normal   RegisteredNode           3m18s  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           2m15s  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	
	
	Name:               ha-107957-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_37_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:37:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:40:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:40:54 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:40:54 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:40:54 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:40:54 +0000   Mon, 16 Sep 2024 10:38:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-107957-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 adfab5a587834e8c9326418cd2577b68
	  System UUID:                15471af5-ad40-4515-bf0c-79f0cc3f164e
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-plmdj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 etcd-ha-107957-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m25s
	  kube-system                 kindnet-sjkjx                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m26s
	  kube-system                 kube-apiserver-ha-107957-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-controller-manager-ha-107957-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-proxy-qtxh9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kube-scheduler-ha-107957-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-vip-ha-107957-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m23s                  kube-proxy       
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   NodeHasSufficientMemory  3m26s (x8 over 3m26s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m26s (x8 over 3m26s)  kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m26s (x7 over 3m26s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m25s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           2m15s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   Starting                 22s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 22s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)      kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)      kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)      kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	
	
	Name:               ha-107957-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:41:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:39:41 +0000   Mon, 16 Sep 2024 10:38:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:39:41 +0000   Mon, 16 Sep 2024 10:38:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:39:41 +0000   Mon, 16 Sep 2024 10:38:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:39:41 +0000   Mon, 16 Sep 2024 10:39:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-107957-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 e81e765d559f41f895dd17c226607233
	  System UUID:                66298d02-b2ec-4333-986a-47e548dee112
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4rfjs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 etcd-ha-107957-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m22s
	  kube-system                 kindnet-rcsxv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-ha-107957-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-ha-107957-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-f2scr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-ha-107957-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-vip-ha-107957-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  RegisteredNode           2m23s                  node-controller  Node ha-107957-m03 event: Registered Node ha-107957-m03 in Controller
	  Normal  NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node ha-107957-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s (x8 over 2m23s)  kubelet          Node ha-107957-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s (x7 over 2m23s)  kubelet          Node ha-107957-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-107957-m03 event: Registered Node ha-107957-m03 in Controller
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-107957-m03 event: Registered Node ha-107957-m03 in Controller
	
	
	Name:               ha-107957-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_51_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:41:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:40:02 +0000   Mon, 16 Sep 2024 10:39:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:40:02 +0000   Mon, 16 Sep 2024 10:39:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:40:02 +0000   Mon, 16 Sep 2024 10:39:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:40:02 +0000   Mon, 16 Sep 2024 10:40:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-107957-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 71accbb4b2bc4cd5b4c754c38afdb6f6
	  System UUID:                85f6a07b-6b9f-43fc-98ae-305e46935522
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4lkzl       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-proxy-hm8zn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 71s                kube-proxy       
	  Normal   RegisteredNode           73s                node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  73s (x2 over 73s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x2 over 73s)  kubelet          Node ha-107957-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x2 over 73s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           70s                node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   RegisteredNode           70s                node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   NodeReady                61s                kubelet          Node ha-107957-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001479]  #6
	[  +0.001580]  #7
	[  +0.071564] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.432637] i8042: Warning: Keylock active
	[  +0.008596] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004682] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000799] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000721] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001027] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.620419] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.097838] systemd[1]: Configuration file /etc/systemd/system/auto-pause.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.148581] kauditd_printk_skb: 46 callbacks suppressed
	[Sep16 10:35] FS-Cache: Duplicate cookie detected
	[  +0.005031] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000007485c404{9P.session} n=000000002b39a795
	[  +0.007541] FS-Cache: O-key=[10] '34323935313533303732'
	[  +0.005370] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.006617] FS-Cache: N-cookie d=000000007485c404{9P.session} n=00000000364f9863
	[  +0.008939] FS-Cache: N-key=[10] '34323935313533303732'
	[ +14.884982] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [2481bf9216b4b36d1f0f3dd6f17b92cfbfc43b6eebff3f320009c9f040ead512] <==
	{"level":"info","ts":"2024-09-16T10:40:47.515812Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01"}
	{"level":"warn","ts":"2024-09-16T10:40:47.876832Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0ea00fb31119a01","rtt":"473.878µs","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:47.876863Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0ea00fb31119a01","rtt":"6.810435ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:49.701990Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01","error":"EOF"}
	{"level":"warn","ts":"2024-09-16T10:40:49.702133Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01","error":"EOF"}
	{"level":"warn","ts":"2024-09-16T10:40:49.707755Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b0ea00fb31119a01","error":"failed to dial b0ea00fb31119a01 on stream MsgApp v2 (EOF)"}
	{"level":"warn","ts":"2024-09-16T10:40:49.817995Z","caller":"rafthttp/stream.go:223","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01"}
	{"level":"warn","ts":"2024-09-16T10:40:50.058205Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"b0ea00fb31119a01","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:50.058259Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b0ea00fb31119a01","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:52.877128Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0ea00fb31119a01","rtt":"473.878µs","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:52.877173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0ea00fb31119a01","rtt":"6.810435ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:54.059277Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"b0ea00fb31119a01","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:54.059333Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b0ea00fb31119a01","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:54.539905Z","caller":"rafthttp/stream.go:194","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01"}
	{"level":"warn","ts":"2024-09-16T10:40:57.877406Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0ea00fb31119a01","rtt":"6.810435ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:57.877440Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0ea00fb31119a01","rtt":"473.878µs","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:58.060756Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"b0ea00fb31119a01","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:40:58.060810Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b0ea00fb31119a01","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-16T10:41:01.048907Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b0ea00fb31119a01"}
	{"level":"info","ts":"2024-09-16T10:41:01.049025Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01"}
	{"level":"info","ts":"2024-09-16T10:41:01.049567Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01"}
	{"level":"info","ts":"2024-09-16T10:41:01.053100Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"b0ea00fb31119a01","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:41:01.053141Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01"}
	{"level":"info","ts":"2024-09-16T10:41:01.096883Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"b0ea00fb31119a01","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:41:01.096937Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b0ea00fb31119a01"}
	
	
	==> kernel <==
	 10:41:03 up 23 min,  0 users,  load average: 1.50, 0.96, 0.58
	Linux ha-107957 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [70b5c5b4e1dc30a22cf6cb15f81f3a486629e5aed5aca6e9dd70ad00dcc0acf4] <==
	I0916 10:40:25.594330       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:40:35.595777       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:40:35.595814       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:40:35.595958       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:40:35.595968       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:40:35.596008       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:40:35.596015       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:40:35.596054       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:35.596062       1 main.go:299] handling current node
	I0916 10:40:45.601963       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:45.601995       1 main.go:299] handling current node
	I0916 10:40:45.602010       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:40:45.602015       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:40:45.602133       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:40:45.602143       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:40:45.602184       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:40:45.602188       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:40:55.594606       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:55.594672       1 main.go:299] handling current node
	I0916 10:40:55.594688       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:40:55.594694       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:40:55.594818       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:40:55.594826       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:40:55.594865       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:40:55.594871       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b1d6cc64c9b2c6f964d9cfedd269b3427f97e09a546dab8177407bdf75af651a] <==
	I0916 10:37:18.323387       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:37:18.332803       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:37:18.334005       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:37:18.338869       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:37:18.735773       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:37:19.463510       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:37:19.474882       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:37:19.665941       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:37:24.237857       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:37:24.286933       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 10:39:35.076855       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48050: use of closed network connection
	E0916 10:39:35.228839       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48058: use of closed network connection
	E0916 10:39:35.384883       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48074: use of closed network connection
	E0916 10:39:35.574724       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48098: use of closed network connection
	E0916 10:39:35.730342       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48106: use of closed network connection
	E0916 10:39:35.886083       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48116: use of closed network connection
	E0916 10:39:36.040362       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48126: use of closed network connection
	E0916 10:39:36.189038       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48136: use of closed network connection
	E0916 10:39:36.336733       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48156: use of closed network connection
	E0916 10:39:36.602543       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48174: use of closed network connection
	E0916 10:39:36.750671       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48184: use of closed network connection
	E0916 10:39:36.899981       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48202: use of closed network connection
	E0916 10:39:37.053525       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48222: use of closed network connection
	E0916 10:39:37.213471       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48248: use of closed network connection
	E0916 10:39:37.363232       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48270: use of closed network connection
	
	
	==> kube-controller-manager [7e57abaf77dbcd8ae424e058d867ae32d9eebd67469026700eb14494673d5bd9] <==
	I0916 10:39:41.752924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m03"
	E0916 10:39:50.271978       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-csg8t failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-csg8t\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 10:39:50.419304       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-107957-m04\" does not exist"
	I0916 10:39:50.439738       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-107957-m04" podCIDRs=["10.244.3.0/24"]
	I0916 10:39:50.439781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:50.440740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:50.839625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:50.949734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:51.088693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:52.349400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957"
	I0916 10:39:53.254355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:53.360861       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:53.478084       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-107957-m04"
	I0916 10:39:53.479190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:39:53.541957       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:00.658417       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:02.371311       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-107957-m04"
	I0916 10:40:02.371722       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:02.384368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:03.265953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:40:54.627224       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m02"
	I0916 10:40:58.920994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.802746ms"
	I0916 10:40:58.921352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.843µs"
	I0916 10:41:00.090840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.906581ms"
	I0916 10:41:00.091004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="92.438µs"
	
	
	==> kube-proxy [961b9339405b05241fd3024c31a7114d64af8103178defd87467d05e162333dd] <==
	I0916 10:37:25.031622       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:37:25.222101       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:37:25.222169       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:37:25.243893       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:37:25.243973       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:37:25.245955       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:37:25.246245       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:37:25.246273       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:37:25.247638       1 config.go:199] "Starting service config controller"
	I0916 10:37:25.247684       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:37:25.248012       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:37:25.248043       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:37:25.248076       1 config.go:328] "Starting node config controller"
	I0916 10:37:25.248081       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:37:25.348839       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:37:25.348869       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:37:25.348888       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5962366f88b6f02c398ff89c07e8f8193763da0e0ff16d3f31f2f8e5d57c573b] <==
	W0916 10:37:16.811838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:37:16.811861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.630129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:37:17.630178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.670046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:37:17.670093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.676785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:37:17.676828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.792440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:37:17.792492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:37:17.864545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:37:17.864602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:37:18.407967       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:38:40.365719       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-62bx2\": pod kube-proxy-62bx2 is already assigned to node \"ha-107957-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-62bx2" node="ha-107957-m03"
	E0916 10:38:40.365840       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b04f58c1-710b-4602-88c4-ce46ad218d6a(kube-system/kube-proxy-62bx2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-62bx2"
	E0916 10:38:40.365867       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-62bx2\": pod kube-proxy-62bx2 is already assigned to node \"ha-107957-m03\"" pod="kube-system/kube-proxy-62bx2"
	I0916 10:38:40.365891       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-62bx2" node="ha-107957-m03"
	E0916 10:38:40.370067       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7bkf8\": pod kindnet-7bkf8 is already assigned to node \"ha-107957-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-7bkf8" node="ha-107957-m03"
	E0916 10:38:40.370228       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d577df2e-0955-4d71-ad76-410167df4a18(kube-system/kindnet-7bkf8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7bkf8"
	E0916 10:38:40.370258       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7bkf8\": pod kindnet-7bkf8 is already assigned to node \"ha-107957-m03\"" pod="kube-system/kindnet-7bkf8"
	I0916 10:38:40.370283       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7bkf8" node="ha-107957-m03"
	E0916 10:39:50.454329       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hm8zn\": pod kube-proxy-hm8zn is already assigned to node \"ha-107957-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hm8zn" node="ha-107957-m04"
	E0916 10:39:50.454395       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6ea6916e-f34c-42b3-996b-033915687fd1(kube-system/kube-proxy-hm8zn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hm8zn"
	E0916 10:39:50.454412       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hm8zn\": pod kube-proxy-hm8zn is already assigned to node \"ha-107957-m04\"" pod="kube-system/kube-proxy-hm8zn"
	I0916 10:39:50.454434       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hm8zn" node="ha-107957-m04"
	
	
	==> kubelet <==
	Sep 16 10:39:19 ha-107957 kubelet[1727]: E0916 10:39:19.612375    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483159612163714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:29 ha-107957 kubelet[1727]: E0916 10:39:29.613939    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483169613692411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:29 ha-107957 kubelet[1727]: E0916 10:39:29.613981    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483169613692411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137822,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:30 ha-107957 kubelet[1727]: I0916 10:39:30.195838    1727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fw56\" (UniqueName: \"kubernetes.io/projected/a43b7850-fcaa-4ca6-a5d0-c04bf031e2e8-kube-api-access-2fw56\") pod \"busybox-7dff88458-m2jh6\" (UID: \"a43b7850-fcaa-4ca6-a5d0-c04bf031e2e8\") " pod="default/busybox-7dff88458-m2jh6"
	Sep 16 10:39:33 ha-107957 kubelet[1727]: I0916 10:39:33.778047    1727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-m2jh6" podStartSLOduration=0.644047761 podStartE2EDuration="3.778021401s" podCreationTimestamp="2024-09-16 10:39:30 +0000 UTC" firstStartedPulling="2024-09-16 10:39:30.467563681 +0000 UTC m=+131.047799538" lastFinishedPulling="2024-09-16 10:39:33.601537309 +0000 UTC m=+134.181773178" observedRunningTime="2024-09-16 10:39:33.777923058 +0000 UTC m=+134.358158932" watchObservedRunningTime="2024-09-16 10:39:33.778021401 +0000 UTC m=+134.358257278"
	Sep 16 10:39:35 ha-107957 kubelet[1727]: E0916 10:39:35.228848    1727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54902->127.0.0.1:43613: write tcp 127.0.0.1:54902->127.0.0.1:43613: write: broken pipe
	Sep 16 10:39:36 ha-107957 kubelet[1727]: E0916 10:39:36.899930    1727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54924->127.0.0.1:43613: write tcp 127.0.0.1:54924->127.0.0.1:43613: write: broken pipe
	Sep 16 10:39:39 ha-107957 kubelet[1727]: E0916 10:39:39.615163    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483179614951012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:39 ha-107957 kubelet[1727]: E0916 10:39:39.615201    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483179614951012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:49 ha-107957 kubelet[1727]: E0916 10:39:49.616373    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483189616178160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:49 ha-107957 kubelet[1727]: E0916 10:39:49.616411    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483189616178160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:59 ha-107957 kubelet[1727]: E0916 10:39:59.617714    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483199617503803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:39:59 ha-107957 kubelet[1727]: E0916 10:39:59.617758    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483199617503803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:09 ha-107957 kubelet[1727]: E0916 10:40:09.619148    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483209618940894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:09 ha-107957 kubelet[1727]: E0916 10:40:09.619193    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483209618940894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:19 ha-107957 kubelet[1727]: E0916 10:40:19.620449    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483219620186776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:19 ha-107957 kubelet[1727]: E0916 10:40:19.620494    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483219620186776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:29 ha-107957 kubelet[1727]: E0916 10:40:29.621565    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483229621318004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:29 ha-107957 kubelet[1727]: E0916 10:40:29.621611    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483229621318004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:39 ha-107957 kubelet[1727]: E0916 10:40:39.622575    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483239622363638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:39 ha-107957 kubelet[1727]: E0916 10:40:39.622617    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483239622363638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:49 ha-107957 kubelet[1727]: E0916 10:40:49.623821    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483249623614138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:49 ha-107957 kubelet[1727]: E0916 10:40:49.623868    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483249623614138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:59 ha-107957 kubelet[1727]: E0916 10:40:59.624876    1727 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483259624701802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:40:59 ha-107957 kubelet[1727]: E0916 10:40:59.624923    1727 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483259624701802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-107957 -n ha-107957
helpers_test.go:261: (dbg) Run:  kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (442.716µs)
helpers_test.go:263: kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (23.77s)
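The recurring "fork/exec /usr/local/bin/kubectl: exec format error" in this and the other failures above points at the kubectl binary itself, not the cluster: exec format error means the kernel refused to execute the file, which is the classic signature of a binary built for a different architecture (or a truncated/corrupt download). A minimal check, assuming an amd64 Linux host and that /usr/local/bin/kubectl is the binary the harness invokes:

    file /usr/local/bin/kubectl                       # expect: ELF 64-bit LSB executable, x86-64
    uname -m                                          # expect: x86_64
    head -c4 /usr/local/bin/kubectl | od -An -tx1     # expect: 7f 45 4c 46 (the ELF magic)

If the reported architecture does not match the host, replacing the binary with the matching release build should clear every kubectl-based assertion in this report.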

TestMultiControlPlane/serial/DeleteSecondaryNode (13.85s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 node delete m03 -v=7 --alsologtostderr
E0916 10:45:02.443935   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-107957 node delete m03 -v=7 --alsologtostderr: (10.571292927s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:511: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (464.95µs)
ha_test.go:513: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
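Since kubectl could not run, node membership after the delete can still be cross-checked through minikube itself; a sketch, assuming the same binary and profile used above:

    out/minikube-linux-amd64 -p ha-107957 node list                      # deleted m03 should no longer be listed
    out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr  # per-node host/kubelet/apiserver state

node list prints one line per node with its name and IP, so the absence of ha-107957-m03 would confirm the deletion that the kubectl get nodes assertion was meant to verify.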
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-107957
helpers_test.go:235: (dbg) docker inspect ha-107957:

-- stdout --
	[
	    {
	        "Id": "8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd",
	        "Created": "2024-09-16T10:37:05.006225665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 84603,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:41:42.187385946Z",
	            "FinishedAt": "2024-09-16T10:41:41.499313783Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/hosts",
	        "LogPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd-json.log",
	        "Name": "/ha-107957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-107957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-107957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-107957",
	                "Source": "/var/lib/docker/volumes/ha-107957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-107957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-107957",
	                "name.minikube.sigs.k8s.io": "ha-107957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b54c677d3bb9d850ed7d2c52d454eca00c1fd308702011ba63cf5528270f8f9",
	            "SandboxKey": "/var/run/docker/netns/8b54c677d3bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-107957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1162a04f8fb0eca4f56c515332b1b6b72501106e380521da303a5999505b78f5",
	                    "EndpointID": "d27fcbd9adf1b31453bd0b8b8e35195cc425639e06d1c032f01e4f04464a6822",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-107957",
	                        "8934c54a2cf0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
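
For reference, the per-port host mappings in the inspect dump above can be read back with a Go template; this is the same query the harness issues later in these logs when dialing SSH. A minimal sketch, assuming the ha-107957 container still exists:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-107957

Against the dump above this should print 32808, the host port published for the container's 22/tcp endpoint.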
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-107957 -n ha-107957
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-107957 logs -n 25: (1.477184309s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m02 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m03_ha-107957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04:/home/docker/cp-test_ha-107957-m03_ha-107957-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m04 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m03_ha-107957-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-107957 cp testdata/cp-test.txt                                               | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile432092999/001/cp-test_ha-107957-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957:/home/docker/cp-test_ha-107957-m04_ha-107957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957 sudo cat                                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957.txt                                |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m02:/home/docker/cp-test_ha-107957-m04_ha-107957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m02 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03:/home/docker/cp-test_ha-107957-m04_ha-107957-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m03 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-107957 node stop m02 -v=7                                                    | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-107957 node start m02 -v=7                                                   | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:41 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-107957 -v=7                                                          | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-107957 -v=7                                                               | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-107957 --wait=true -v=7                                                   | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:44 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-107957                                                               | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC |                     |
	| node    | ha-107957 node delete m03 -v=7                                                  | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:45 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
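	
	For reference, the final audit entry above is the operation under test; assuming the same CI workspace layout, it can be replayed verbatim with:
	
	    out/minikube-linux-amd64 -p ha-107957 node delete m03 -v=7 --alsologtostderr
	
	followed by the status check the harness performs (out/minikube-linux-amd64 status --format={{.Host}} -p ha-107957 -n ha-107957).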
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:41:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:41:41.810860   84300 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:41:41.811137   84300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:41.811147   84300 out.go:358] Setting ErrFile to fd 2...
	I0916 10:41:41.811151   84300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:41.811334   84300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:41:41.811908   84300 out.go:352] Setting JSON to false
	I0916 10:41:41.812919   84300 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1442,"bootTime":1726481860,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:41:41.813010   84300 start.go:139] virtualization: kvm guest
	I0916 10:41:41.815473   84300 out.go:177] * [ha-107957] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:41:41.816999   84300 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:41:41.817044   84300 notify.go:220] Checking for updates...
	I0916 10:41:41.819254   84300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:41:41.820501   84300 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:41:41.821638   84300 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:41:41.822911   84300 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:41:41.824361   84300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:41:41.826003   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:41.826138   84300 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:41:41.850128   84300 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:41:41.850270   84300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:41:41.904630   84300 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:41:41.894373071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:41:41.904767   84300 docker.go:318] overlay module found
	I0916 10:41:41.906837   84300 out.go:177] * Using the docker driver based on existing profile
	I0916 10:41:41.908233   84300 start.go:297] selected driver: docker
	I0916 10:41:41.908262   84300 start.go:901] validating driver "docker" against &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:41.908445   84300 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:41:41.908547   84300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:41:41.960084   84300 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:41:41.949843993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:41:41.960678   84300 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:41:41.960705   84300 cni.go:84] Creating CNI manager for ""
	I0916 10:41:41.960732   84300 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:41:41.960781   84300 start.go:340] cluster config:
	{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:41.962841   84300 out.go:177] * Starting "ha-107957" primary control-plane node in "ha-107957" cluster
	I0916 10:41:41.964331   84300 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:41:41.965650   84300 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:41:41.966949   84300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:41:41.966999   84300 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:41:41.967017   84300 cache.go:56] Caching tarball of preloaded images
	I0916 10:41:41.967027   84300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:41:41.967109   84300 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:41:41.967125   84300 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:41:41.967254   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:41:41.986972   84300 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:41:41.986994   84300 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:41:41.987063   84300 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:41:41.987085   84300 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:41:41.987093   84300 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:41:41.987100   84300 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:41:41.987108   84300 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:41:41.988347   84300 image.go:273] response: 
	I0916 10:41:42.048500   84300 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:41:42.048543   84300 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:41:42.048594   84300 start.go:360] acquireMachinesLock for ha-107957: {Name:mkd47d2ce5dbb0c6b4cd5ea9479cc8820c855026 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:41:42.048672   84300 start.go:364] duration metric: took 52.772µs to acquireMachinesLock for "ha-107957"
	I0916 10:41:42.048696   84300 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:41:42.048704   84300 fix.go:54] fixHost starting: 
	I0916 10:41:42.048920   84300 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:41:42.066163   84300 fix.go:112] recreateIfNeeded on ha-107957: state=Stopped err=<nil>
	W0916 10:41:42.066194   84300 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:41:42.068404   84300 out.go:177] * Restarting existing docker container for "ha-107957" ...
	I0916 10:41:42.070004   84300 cli_runner.go:164] Run: docker start ha-107957
	I0916 10:41:42.362323   84300 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:41:42.381861   84300 kic.go:430] container "ha-107957" state is running.
	I0916 10:41:42.382300   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:41:42.401642   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:41:42.401875   84300 machine.go:93] provisionDockerMachine start ...
	I0916 10:41:42.401931   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:42.420280   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:42.420558   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0916 10:41:42.420602   84300 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:41:42.421209   84300 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53866->127.0.0.1:32808: read: connection reset by peer
	I0916 10:41:45.553123   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957
	
	I0916 10:41:45.553154   84300 ubuntu.go:169] provisioning hostname "ha-107957"
	I0916 10:41:45.553223   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:45.571231   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:45.571401   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0916 10:41:45.571413   84300 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957 && echo "ha-107957" | sudo tee /etc/hostname
	I0916 10:41:45.712940   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957
	
	I0916 10:41:45.713018   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:45.730596   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:45.730780   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0916 10:41:45.730840   84300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:41:45.861537   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:41:45.861566   84300 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:41:45.861589   84300 ubuntu.go:177] setting up certificates
	I0916 10:41:45.861601   84300 provision.go:84] configureAuth start
	I0916 10:41:45.861661   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:41:45.879468   84300 provision.go:143] copyHostCerts
	I0916 10:41:45.879503   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:41:45.879530   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:41:45.879535   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:41:45.879595   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:41:45.879674   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:41:45.879692   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:41:45.879696   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:41:45.879718   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:41:45.879763   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:41:45.879779   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:41:45.879785   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:41:45.879804   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:41:45.879862   84300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957 san=[127.0.0.1 192.168.49.2 ha-107957 localhost minikube]
	I0916 10:41:45.936355   84300 provision.go:177] copyRemoteCerts
	I0916 10:41:45.936424   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:41:45.936457   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:45.954586   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:41:46.054211   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:41:46.054266   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:41:46.076094   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:41:46.076164   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:41:46.098241   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:41:46.098303   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:41:46.120545   84300 provision.go:87] duration metric: took 258.929744ms to configureAuth
	I0916 10:41:46.120595   84300 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:41:46.120809   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:46.120910   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:46.138215   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:46.138418   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0916 10:41:46.138439   84300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:41:46.481400   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:41:46.481429   84300 machine.go:96] duration metric: took 4.079539557s to provisionDockerMachine
	I0916 10:41:46.481443   84300 start.go:293] postStartSetup for "ha-107957" (driver="docker")
	I0916 10:41:46.481453   84300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:41:46.481519   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:41:46.481568   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:46.500861   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:41:46.594483   84300 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:41:46.597597   84300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:41:46.597627   84300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:41:46.597635   84300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:41:46.597641   84300 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:41:46.597651   84300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:41:46.597706   84300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:41:46.597793   84300 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:41:46.597805   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:41:46.597895   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:41:46.605617   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:41:46.627111   84300 start.go:296] duration metric: took 145.653976ms for postStartSetup
	I0916 10:41:46.627202   84300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:41:46.627262   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:46.644838   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:41:46.734219   84300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:41:46.738453   84300 fix.go:56] duration metric: took 4.689743176s for fixHost
	I0916 10:41:46.738486   84300 start.go:83] releasing machines lock for "ha-107957", held for 4.689800659s
	I0916 10:41:46.738560   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:41:46.756039   84300 ssh_runner.go:195] Run: cat /version.json
	I0916 10:41:46.756097   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:46.756149   84300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:41:46.756218   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:46.774409   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:41:46.774472   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:41:46.948739   84300 ssh_runner.go:195] Run: systemctl --version
	I0916 10:41:46.952877   84300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:41:47.090461   84300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:41:47.094981   84300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:41:47.103183   84300 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:41:47.103246   84300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:41:47.111902   84300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:41:47.111935   84300 start.go:495] detecting cgroup driver to use...
	I0916 10:41:47.111969   84300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:41:47.112016   84300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:41:47.123527   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:41:47.134283   84300 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:41:47.134346   84300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:41:47.146305   84300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:41:47.157044   84300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:41:47.233851   84300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:41:47.305787   84300 docker.go:233] disabling docker service ...
	I0916 10:41:47.305843   84300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:41:47.317368   84300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:41:47.327695   84300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:41:47.401989   84300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:41:47.477948   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:41:47.488196   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:41:47.502791   84300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:41:47.502861   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:47.512290   84300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:41:47.512366   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:47.521636   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:47.530676   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:47.539846   84300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:41:47.548978   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:47.558145   84300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:47.566589   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:47.575131   84300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:41:47.582475   84300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:41:47.590145   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:47.661541   84300 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:41:47.761148   84300 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:41:47.761206   84300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:41:47.764659   84300 start.go:563] Will wait 60s for crictl version
	I0916 10:41:47.764707   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:41:47.767682   84300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:41:47.803248   84300 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:41:47.803318   84300 ssh_runner.go:195] Run: crio --version
	I0916 10:41:47.836825   84300 ssh_runner.go:195] Run: crio --version
	I0916 10:41:47.873641   84300 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:41:47.875406   84300 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:41:47.892426   84300 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:41:47.895986   84300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:41:47.906578   84300 kubeadm.go:883] updating cluster {Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:41:47.906742   84300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:41:47.906804   84300 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:41:47.946791   84300 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:41:47.946812   84300 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:41:47.946881   84300 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:41:47.978848   84300 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:41:47.978872   84300 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:41:47.978880   84300 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:41:47.978994   84300 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:41:47.979068   84300 ssh_runner.go:195] Run: crio config
	I0916 10:41:48.022083   84300 cni.go:84] Creating CNI manager for ""
	I0916 10:41:48.022103   84300 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:41:48.022113   84300 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:41:48.022133   84300 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-107957 NodeName:ha-107957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:41:48.022248   84300 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-107957"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
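The kubeadm config above is rendered from the option struct logged at kubeadm.go:181. A stripped-down sketch of that render step using text/template follows; the opts struct and field set are illustrative, not minikube's actual types:

    package main

    import (
        "os"
        "text/template"
    )

    // opts is a tiny illustrative subset of the kubeadm options above.
    type opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        // Render the two documents with the values seen in this run.
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.49.2",
            BindPort:         8443,
            NodeName:         "ha-107957",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
        })
    }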
	
	I0916 10:41:48.022265   84300 kube-vip.go:115] generating kube-vip config ...
	I0916 10:41:48.022301   84300 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:41:48.033856   84300 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
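Control-plane load balancing is given up above because `lsmod | grep ip_vs` exits non-zero. The same probe can be made without shelling out by scanning /proc/modules, which is what lsmod itself reads; a stdlib-only sketch:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasIPVS reports whether any ip_vs kernel module is loaded,
    // equivalent to the `lsmod | grep ip_vs` check in the log above.
    func hasIPVS() (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            if strings.HasPrefix(s.Text(), "ip_vs") {
                return true, nil
            }
        }
        return false, s.Err()
    }

    func main() {
        ok, err := hasIPVS()
        fmt.Println(ok, err)
    }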
	I0916 10:41:48.033970   84300 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
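The manifest above runs kube-vip as a host-network static pod: leader election on the plndr-cp-lock lease picks one control-plane node to hold the VIP 192.168.49.254/32 on eth0. A sketch that pulls the VIP back out of the generated manifest, assuming gopkg.in/yaml.v3 as the parser (minikube itself may use different machinery):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // pod mirrors just enough of the manifest above to reach the env list.
    type pod struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var p pod
        if err := yaml.Unmarshal(data, &p); err != nil {
            panic(err)
        }
        if len(p.Spec.Containers) == 0 {
            return
        }
        for _, e := range p.Spec.Containers[0].Env {
            if e.Name == "address" {
                fmt.Println("VIP:", e.Value) // 192.168.49.254 in this run
            }
        }
    }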
	I0916 10:41:48.034021   84300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:41:48.041919   84300 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:41:48.041986   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:41:48.049548   84300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0916 10:41:48.065392   84300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:41:48.082133   84300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0916 10:41:48.098860   84300 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:41:48.116051   84300 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:41:48.119751   84300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
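The bash one-liner above is an idempotent /etc/hosts update: strip any stale line ending in the hostname, then append the fresh mapping. A Go equivalent, stdlib only, with an invented helper name:

    package main

    import (
        "os"
        "strings"
    )

    // pinHost rewrites /etc/hosts so exactly one line maps name to ip,
    // the same read-filter-append pattern as the one-liner above.
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var keep []string
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+name) { // drop stale entries
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }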
	I0916 10:41:48.131286   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:48.212745   84300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:48.225016   84300 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.2
	I0916 10:41:48.225041   84300 certs.go:194] generating shared ca certs ...
	I0916 10:41:48.225055   84300 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:48.225191   84300 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:41:48.225229   84300 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:41:48.225240   84300 certs.go:256] generating profile certs ...
	I0916 10:41:48.225309   84300 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:41:48.225348   84300 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.ef1fe742
	I0916 10:41:48.225375   84300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.ef1fe742 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 10:41:48.387941   84300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.ef1fe742 ...
	I0916 10:41:48.387976   84300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.ef1fe742: {Name:mk2ae52f8f44f5ac4e0479d398bce584b85762dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:48.388156   84300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.ef1fe742 ...
	I0916 10:41:48.388169   84300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.ef1fe742: {Name:mked0b79b9c371128578590f9b0d1257f5df8c6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:48.388240   84300 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.ef1fe742 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
	I0916 10:41:48.388385   84300 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.ef1fe742 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key
	I0916 10:41:48.388513   84300 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:41:48.388526   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:41:48.388539   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:41:48.388552   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:41:48.388565   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:41:48.388577   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:41:48.388588   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:41:48.388602   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:41:48.388614   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:41:48.388664   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:41:48.388691   84300 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:41:48.388701   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:41:48.388722   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:41:48.388742   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:41:48.388762   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:41:48.388797   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:41:48.388823   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:41:48.388837   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:41:48.388849   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:48.389390   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:41:48.413707   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:41:48.435549   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:41:48.457035   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:41:48.478550   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:41:48.499799   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:41:48.521406   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:41:48.543026   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:41:48.565712   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:41:48.588279   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:41:48.610843   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:41:48.633401   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:41:48.649690   84300 ssh_runner.go:195] Run: openssl version
	I0916 10:41:48.654792   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:41:48.663276   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:41:48.666473   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:41:48.666530   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:41:48.672781   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:41:48.681327   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:41:48.690039   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:41:48.693242   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:41:48.693290   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:41:48.699504   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:41:48.707863   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:41:48.717914   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:48.721546   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:48.721611   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:48.728185   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
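`openssl x509 -hash -noout` prints the subject-name hash OpenSSL uses to locate CAs in /etc/ssl/certs, and the ln -fs that follows creates the <hash>.0 lookup link (b5213941.0 for minikubeCA above). A sketch of the pair via os/exec, assuming openssl is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash creates /etc/ssl/certs/<subject-hash>.0 -> certPath,
    // mirroring the openssl/ln pair in the log above.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem"))
    }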
	I0916 10:41:48.737136   84300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:41:48.740452   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:41:48.747010   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:41:48.753310   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:41:48.759749   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:41:48.766114   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:41:48.772344   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
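Each `-checkend 86400` run above asks whether a certificate expires within the next 24 hours. The pure-Go equivalent with crypto/x509, stdlib only:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires before
    // now+d, matching `openssl x509 -checkend` as used in the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }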
	I0916 10:41:48.779039   84300 kubeadm.go:392] StartCluster: {Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:48.779205   84300 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:41:48.779271   84300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:41:48.811936   84300 cri.go:89] found id: ""
	I0916 10:41:48.812004   84300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:41:48.820707   84300 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:41:48.820729   84300 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:41:48.820774   84300 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:41:48.828753   84300 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:41:48.829119   84300 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-107957" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:41:48.829208   84300 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "ha-107957" cluster setting kubeconfig missing "ha-107957" context setting]
	I0916 10:41:48.829588   84300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:48.829964   84300 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:41:48.830165   84300 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:41:48.830562   84300 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:41:48.830704   84300 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:41:48.838876   84300 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0916 10:41:48.838899   84300 kubeadm.go:597] duration metric: took 18.16439ms to restartPrimaryControlPlane
	I0916 10:41:48.838909   84300 kubeadm.go:394] duration metric: took 59.880886ms to StartCluster
	I0916 10:41:48.838939   84300 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:48.839006   84300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:41:48.839707   84300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:48.839955   84300 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:41:48.839983   84300 start.go:241] waiting for startup goroutines ...
	I0916 10:41:48.839992   84300 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:41:48.840214   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:48.843695   84300 out.go:177] * Enabled addons: 
	I0916 10:41:48.845324   84300 addons.go:510] duration metric: took 5.328597ms for enable addons: enabled=[]
	I0916 10:41:48.845389   84300 start.go:246] waiting for cluster config update ...
	I0916 10:41:48.845397   84300 start.go:255] writing updated cluster config ...
	I0916 10:41:48.847103   84300 out.go:201] 
	I0916 10:41:48.848466   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:48.848560   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:41:48.850283   84300 out.go:177] * Starting "ha-107957-m02" control-plane node in "ha-107957" cluster
	I0916 10:41:48.851541   84300 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:41:48.853158   84300 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:41:48.854527   84300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:41:48.854546   84300 cache.go:56] Caching tarball of preloaded images
	I0916 10:41:48.854595   84300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:41:48.854626   84300 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:41:48.854641   84300 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:41:48.854774   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:41:48.874609   84300 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:41:48.874628   84300 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:41:48.874705   84300 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:41:48.874724   84300 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:41:48.874732   84300 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:41:48.874742   84300 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:41:48.874751   84300 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:41:48.875955   84300 image.go:273] response: 
	I0916 10:41:48.929465   84300 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:41:48.929515   84300 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:41:48.929550   84300 start.go:360] acquireMachinesLock for ha-107957-m02: {Name:mkbd1a70c826dc0de88173dfa3a4a79ea68a23fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:41:48.929631   84300 start.go:364] duration metric: took 59.975µs to acquireMachinesLock for "ha-107957-m02"
	I0916 10:41:48.929651   84300 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:41:48.929659   84300 fix.go:54] fixHost starting: m02
	I0916 10:41:48.929942   84300 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:41:48.949039   84300 fix.go:112] recreateIfNeeded on ha-107957-m02: state=Stopped err=<nil>
	W0916 10:41:48.949071   84300 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:41:48.951727   84300 out.go:177] * Restarting existing docker container for "ha-107957-m02" ...
	I0916 10:41:48.953317   84300 cli_runner.go:164] Run: docker start ha-107957-m02
	I0916 10:41:49.224166   84300 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:41:49.242654   84300 kic.go:430] container "ha-107957-m02" state is running.
	I0916 10:41:49.242995   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:41:49.260804   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:41:49.261082   84300 machine.go:93] provisionDockerMachine start ...
	I0916 10:41:49.261159   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:49.279218   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:49.279431   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0916 10:41:49.279446   84300 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:41:49.280182   84300 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49086->127.0.0.1:32813: read: connection reset by peer
	I0916 10:41:52.412787   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m02
	
	I0916 10:41:52.412817   84300 ubuntu.go:169] provisioning hostname "ha-107957-m02"
	I0916 10:41:52.412887   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:52.429730   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:52.429939   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0916 10:41:52.429957   84300 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m02 && echo "ha-107957-m02" | sudo tee /etc/hostname
	I0916 10:41:52.576097   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m02
	
	I0916 10:41:52.576186   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:52.593515   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:52.593747   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0916 10:41:52.593774   84300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:41:52.725616   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
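The provisioning above dials the container's mapped SSH port (127.0.0.1:32813); the first attempt fails with a connection reset while sshd is still coming up, and the retry then runs `hostname` and the /etc/hosts fix-up. A stripped-down equivalent using golang.org/x/crypto/ssh follows; the key path, retry count, and host-key handling are illustrative only:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runHostname dials 127.0.0.1:32813 as the docker user and runs
    // `hostname`, retrying while the container's sshd is still booting.
    func runHostname(keyPath string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
            Timeout:         10 * time.Second,
        }
        var client *ssh.Client
        for i := 0; i < 5; i++ { // crude retry for boot-time resets
            if client, err = ssh.Dial("tcp", "127.0.0.1:32813", cfg); err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        return string(out), err
    }

    func main() {
        fmt.Println(runHostname(os.ExpandEnv("$HOME/.minikube/machines/ha-107957-m02/id_rsa")))
    }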
	I0916 10:41:52.725651   84300 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:41:52.725672   84300 ubuntu.go:177] setting up certificates
	I0916 10:41:52.725683   84300 provision.go:84] configureAuth start
	I0916 10:41:52.725733   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:41:52.742599   84300 provision.go:143] copyHostCerts
	I0916 10:41:52.742642   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:41:52.742690   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:41:52.742702   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:41:52.742776   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:41:52.742869   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:41:52.742895   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:41:52.742904   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:41:52.742944   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:41:52.743010   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:41:52.743034   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:41:52.743043   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:41:52.743076   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:41:52.743144   84300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m02 san=[127.0.0.1 192.168.49.3 ha-107957-m02 localhost minikube]
	I0916 10:41:52.890899   84300 provision.go:177] copyRemoteCerts
	I0916 10:41:52.890963   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:41:52.890997   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:52.908956   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:41:53.006020   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:41:53.006099   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:41:53.029616   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:41:53.029707   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:41:53.051908   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:41:53.051991   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:41:53.073700   84300 provision.go:87] duration metric: took 347.998179ms to configureAuth
	I0916 10:41:53.073735   84300 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:41:53.073959   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:53.074075   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:53.090288   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:53.090477   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0916 10:41:53.090499   84300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:41:53.420476   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:41:53.420499   84300 machine.go:96] duration metric: took 4.159398959s to provisionDockerMachine
	I0916 10:41:53.420511   84300 start.go:293] postStartSetup for "ha-107957-m02" (driver="docker")
	I0916 10:41:53.420520   84300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:41:53.420569   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:41:53.420609   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:53.438065   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:41:53.533971   84300 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:41:53.537289   84300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:41:53.537357   84300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:41:53.537373   84300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:41:53.537384   84300 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:41:53.537393   84300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:41:53.537451   84300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:41:53.537548   84300 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:41:53.537560   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:41:53.537680   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:41:53.545585   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:41:53.568068   84300 start.go:296] duration metric: took 147.542592ms for postStartSetup
	I0916 10:41:53.568149   84300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:41:53.568190   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:53.585937   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:41:53.678573   84300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:41:53.682591   84300 fix.go:56] duration metric: took 4.752927813s for fixHost
	I0916 10:41:53.682612   84300 start.go:83] releasing machines lock for "ha-107957-m02", held for 4.752969514s
	I0916 10:41:53.682684   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:41:53.701753   84300 out.go:177] * Found network options:
	I0916 10:41:53.703169   84300 out.go:177]   - NO_PROXY=192.168.49.2
	W0916 10:41:53.704552   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:41:53.704589   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:41:53.704666   84300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:41:53.704704   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:53.704748   84300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:41:53.704810   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:41:53.722157   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:41:53.722183   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:41:53.958711   84300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:41:53.964051   84300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:41:54.003825   84300 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:41:54.003924   84300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:41:54.016410   84300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:41:54.016444   84300 start.go:495] detecting cgroup driver to use...
	I0916 10:41:54.016483   84300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:41:54.016542   84300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:41:54.099893   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:41:54.113202   84300 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:41:54.113272   84300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:41:54.126619   84300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:41:54.207931   84300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:41:54.535026   84300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:41:54.809095   84300 docker.go:233] disabling docker service ...
	I0916 10:41:54.809190   84300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:41:54.826748   84300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:41:54.894059   84300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:41:55.215411   84300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:41:55.431948   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:41:55.444134   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:41:55.499335   84300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:41:55.499401   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:55.516834   84300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:41:55.516959   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:55.531541   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:55.600429   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:55.617803   84300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:41:55.628385   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:55.697856   84300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:55.709238   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:41:55.725175   84300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:41:55.796182   84300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:41:55.807325   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:56.113185   84300 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:41:57.677807   84300 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.564587415s)
	I0916 10:41:57.677835   84300 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:41:57.677886   84300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:41:57.681466   84300 start.go:563] Will wait 60s for crictl version
	I0916 10:41:57.681540   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:41:57.684923   84300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:41:57.716437   84300 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:41:57.716522   84300 ssh_runner.go:195] Run: crio --version
	I0916 10:41:57.752325   84300 ssh_runner.go:195] Run: crio --version
	I0916 10:41:57.786533   84300 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:41:57.788265   84300 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:41:57.790043   84300 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:41:57.808264   84300 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:41:57.811895   84300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:41:57.822792   84300 mustload.go:65] Loading cluster: ha-107957
	I0916 10:41:57.823010   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:57.823213   84300 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:41:57.840981   84300 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:41:57.841227   84300 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.3
	I0916 10:41:57.841240   84300 certs.go:194] generating shared ca certs ...
	I0916 10:41:57.841259   84300 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:57.841427   84300 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:41:57.841480   84300 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:41:57.841493   84300 certs.go:256] generating profile certs ...
	I0916 10:41:57.841618   84300 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:41:57.841710   84300 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.5622c22a
	I0916 10:41:57.841765   84300 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:41:57.841778   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:41:57.841798   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:41:57.841818   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:41:57.841836   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:41:57.841854   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:41:57.841877   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:41:57.841896   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:41:57.841914   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:41:57.841981   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:41:57.842023   84300 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:41:57.842038   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:41:57.842073   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:41:57.842106   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:41:57.842136   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:41:57.842190   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:41:57.842227   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:57.842248   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:41:57.842266   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:41:57.842328   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:41:57.859350   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:41:57.949727   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:41:57.953857   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:41:57.966772   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:41:57.970789   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:41:57.983980   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:41:57.987309   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:41:58.001712   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:41:58.005354   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:41:58.017265   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:41:58.020317   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:41:58.031348   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:41:58.034508   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:41:58.045966   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:41:58.068335   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:41:58.090739   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:41:58.114062   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:41:58.137380   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:41:58.161129   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:41:58.183699   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:41:58.208875   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:41:58.234920   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:41:58.261588   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:41:58.285204   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:41:58.308320   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:41:58.325931   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:41:58.343617   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:41:58.361074   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:41:58.377531   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:41:58.394433   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:41:58.411588   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:41:58.427512   84300 ssh_runner.go:195] Run: openssl version
	I0916 10:41:58.432570   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:41:58.441751   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:41:58.445038   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:41:58.445142   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:41:58.452058   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:41:58.460719   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:41:58.470126   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:58.473855   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:58.473904   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:58.480402   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:41:58.488564   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:41:58.497104   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:41:58.500503   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:41:58.500550   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:41:58.506956   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
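The cert-install steps above reproduce OpenSSL's CA directory layout by hand: each openssl x509 -hash -noout call prints the certificate's subject hash, and that hash becomes the <hash>.0 symlink name under /etc/ssl/certs. A minimal sketch of the same convention, using the minikubeCA hash visible two commands earlier:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941
  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0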
	I0916 10:41:58.515171   84300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:41:58.519044   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:41:58.525148   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:41:58.530977   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:41:58.536862   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:41:58.542742   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:41:58.548848   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
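The -checkend 86400 flag used in the six checks above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it remains valid past that window. The same check run by hand looks like (a sketch):

  $ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
  Certificate will not expire
  $ echo $?
  0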
	I0916 10:41:58.555331   84300 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0916 10:41:58.555446   84300 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
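The rendered unit and its kubeadm drop-in are copied to the node a few steps below (kubelet.service and 10-kubeadm.conf). Once in place, both can be inspected together with (a sketch):

  $ systemctl cat kubelet
  # /lib/systemd/system/kubelet.service
  ...
  # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  ...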
	I0916 10:41:58.555476   84300 kube-vip.go:115] generating kube-vip config ...
	I0916 10:41:58.555523   84300 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:41:58.567283   84300 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
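ip_vs is an optional kernel module, so the failed lsmod check above is non-fatal: minikube simply skips kube-vip's IPVS load-balancing mode and keeps the ARP-advertised VIP (note vip_arp: "true" in the config below). On a host where the module is built but merely unloaded, it could be brought in with (a sketch; availability depends on the kernel build):

  $ sudo modprobe ip_vs
  $ lsmod | grep ip_vs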
	I0916 10:41:58.567348   84300 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
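kube-vip runs as a static pod: once the manifest is copied to /etc/kubernetes/manifests (a few lines below), the kubelet starts it directly, without API-server involvement. Its presence can be verified on the node with (a sketch):

  $ ls /etc/kubernetes/manifests/kube-vip.yaml
  /etc/kubernetes/manifests/kube-vip.yaml
  $ sudo crictl ps --name kube-vip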
	I0916 10:41:58.567405   84300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:41:58.575589   84300 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:41:58.575664   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:41:58.583525   84300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:41:58.600318   84300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:41:58.617241   84300 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:41:58.633462   84300 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:41:58.636923   84300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:41:58.647010   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:58.749797   84300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:58.761446   84300 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:41:58.761772   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:58.763868   84300 out.go:177] * Verifying Kubernetes components...
	I0916 10:41:58.765008   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:58.858005   84300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:58.869532   84300 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:41:58.869792   84300 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:41:58.869851   84300 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:41:58.870089   84300 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m02" to be "Ready" ...
	I0916 10:41:58.870180   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:41:58.870193   84300 round_trippers.go:469] Request Headers:
	I0916 10:41:58.870204   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:58.870210   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:10.396081   84300 round_trippers.go:574] Response Status: 500 Internal Server Error in 11525 milliseconds
	I0916 10:42:10.396429   84300 node_ready.go:53] error getting node "ha-107957-m02": etcdserver: request timed out
	I0916 10:42:10.396519   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:10.396527   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:10.396538   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:10.396545   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.204898   84300 round_trippers.go:574] Response Status: 200 OK in 3808 milliseconds
	I0916 10:42:14.206515   84300 node_ready.go:49] node "ha-107957-m02" has status "Ready":"True"
	I0916 10:42:14.206589   84300 node_ready.go:38] duration metric: took 15.336480344s for node "ha-107957-m02" to be "Ready" ...
	I0916 10:42:14.206616   84300 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
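The label selectors listed above are the same ones a manual spot-check would use; roughly equivalent to (a sketch using the test's profile name as the kubeconfig context):

  $ kubectl --context ha-107957 -n kube-system get pods -l k8s-app=kube-dns
  $ kubectl --context ha-107957 -n kube-system get pods -l component=kube-apiserver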
	I0916 10:42:14.206711   84300 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:42:14.206752   84300 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:42:14.206860   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:42:14.206895   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.206916   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.206934   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.216422   84300 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 10:42:14.225842   84300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.225954   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:42:14.225967   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.225975   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.225982   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.228314   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:14.229027   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:14.229045   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.229056   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.229061   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.231078   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:14.231552   84300 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:14.231569   84300 pod_ready.go:82] duration metric: took 5.69193ms for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.231578   84300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.231629   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:42:14.231636   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.231643   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.231649   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.233726   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:14.234259   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:14.234276   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.234283   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.234286   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.236219   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:42:14.236637   84300 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:14.236656   84300 pod_ready.go:82] duration metric: took 5.071097ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.236668   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.236727   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:42:14.236736   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.236746   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.236753   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.238919   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:14.239420   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:14.239437   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.239444   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.239447   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.241415   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:42:14.241868   84300 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:14.241887   84300 pod_ready.go:82] duration metric: took 5.212105ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.241900   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.241956   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:42:14.241967   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.241977   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.241984   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.243760   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:42:14.244225   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:14.244238   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.244248   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.244253   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.246055   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:42:14.246412   84300 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:14.246430   84300 pod_ready.go:82] duration metric: took 4.521345ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.246441   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.246493   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:42:14.246502   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.246512   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.246518   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.248193   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:42:14.406860   84300 request.go:632] Waited for 158.224137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:42:14.406912   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:42:14.406917   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.406924   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.406928   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.409066   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:14.409720   84300 pod_ready.go:93] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:14.409752   84300 pod_ready.go:82] duration metric: took 163.302206ms for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.409784   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.607202   84300 request.go:632] Waited for 197.339931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:42:14.607261   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:42:14.607266   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.607273   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.607277   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.609752   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:14.807704   84300 request.go:632] Waited for 197.392262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:14.807759   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:14.807766   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:14.807773   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:14.807776   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:14.810416   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:14.811144   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:14.811169   84300 pod_ready.go:82] duration metric: took 401.369126ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:14.811183   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:15.007843   84300 request.go:632] Waited for 196.574325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:15.007918   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:15.007924   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:15.007931   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:15.007936   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:15.011080   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:15.206913   84300 request.go:632] Waited for 195.118699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:15.206971   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:15.206977   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:15.206985   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:15.206992   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:15.209034   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:15.407604   84300 request.go:632] Waited for 95.251081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:15.407681   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:15.407693   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:15.407704   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:15.407717   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:15.410164   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:15.607296   84300 request.go:632] Waited for 196.381162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:15.607370   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:15.607376   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:15.607387   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:15.607393   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:15.609653   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:15.812300   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:15.812326   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:15.812339   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:15.812345   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:15.815155   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:16.007829   84300 request.go:632] Waited for 191.366097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:16.007977   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:16.007989   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:16.007999   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:16.008022   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:16.010984   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:16.311437   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:16.311510   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:16.311533   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:16.311549   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:16.314423   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:16.407402   84300 request.go:632] Waited for 92.19673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:16.407484   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:16.407496   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:16.407507   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:16.407512   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:16.409888   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:16.811617   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:16.811638   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:16.811649   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:16.811654   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:16.814351   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:16.815015   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:16.815032   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:16.815042   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:16.815048   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:16.817100   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:16.817641   84300 pod_ready.go:103] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:17.311845   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:17.311867   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:17.311876   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:17.311881   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:17.314765   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:17.315588   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:17.315608   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:17.315620   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:17.315624   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:17.317976   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:17.811802   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:17.811821   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:17.811829   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:17.811833   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:17.814499   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:17.815158   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:17.815173   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:17.815181   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:17.815184   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:17.819536   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:42:18.311907   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:18.311931   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:18.311939   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:18.311944   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:18.314536   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:18.315228   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:18.315241   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:18.315248   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:18.315251   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:18.317303   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:18.812189   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:18.812212   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:18.812221   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:18.812226   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:18.814905   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:18.815471   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:18.815486   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:18.815492   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:18.815498   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:18.817531   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:18.817995   84300 pod_ready.go:103] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:19.312231   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:19.312254   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:19.312265   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:19.312270   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:19.314498   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:19.315325   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:19.315345   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:19.315355   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:19.315361   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:19.317499   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:19.812298   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:19.812320   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:19.812329   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:19.812334   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:19.815189   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:19.815860   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:19.815884   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:19.815893   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:19.815897   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:19.818314   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:20.312070   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:20.312092   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:20.312099   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:20.312103   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:20.315010   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:20.315627   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:20.315643   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:20.315650   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:20.315654   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:20.317909   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:20.811698   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:20.811727   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:20.811738   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:20.811745   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:20.814347   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:20.814961   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:20.814978   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:20.814987   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:20.814995   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:20.817234   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:21.312103   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:21.312124   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:21.312132   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:21.312136   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:21.315085   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:21.315712   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:21.315730   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:21.315737   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:21.315741   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:21.318149   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:21.318704   84300 pod_ready.go:103] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:21.812258   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:21.812278   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:21.812285   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:21.812290   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:21.815122   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:21.815898   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:21.815916   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:21.815926   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:21.815932   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:21.818018   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:22.311815   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:22.311842   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:22.311850   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:22.311855   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:22.314387   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:22.314994   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:22.315011   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:22.315021   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:22.315026   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:22.317134   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:22.811989   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:22.812008   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:22.812015   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:22.812024   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:22.814832   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:22.815434   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:22.815451   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:22.815461   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:22.815464   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:22.817600   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:23.311374   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:23.311394   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:23.311402   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:23.311405   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:23.314004   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:23.314717   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:23.314733   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:23.314741   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:23.314745   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:23.317042   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:23.811957   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:23.811976   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:23.811984   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:23.811988   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:23.814879   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:23.815535   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:23.815550   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:23.815559   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:23.815568   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:23.817776   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:23.818330   84300 pod_ready.go:103] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:24.311593   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:24.311614   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:24.311624   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:24.311628   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:24.314320   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:24.314871   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:24.314886   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:24.314893   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:24.314896   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:24.317173   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:24.812123   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:24.812147   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:24.812157   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:24.812163   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:24.814729   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:24.815331   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:24.815347   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:24.815354   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:24.815357   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:24.817460   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:25.311427   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:25.311448   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:25.311455   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:25.311460   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:25.314305   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:25.314967   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:25.314983   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:25.314990   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:25.314995   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:25.317175   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:25.812058   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:25.812083   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:25.812094   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:25.812100   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:25.814761   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:25.815355   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:25.815374   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:25.815381   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:25.815384   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:25.817700   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:26.312220   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:26.312242   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:26.312250   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:26.312253   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:26.315012   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:26.315597   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:26.315609   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:26.315616   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:26.315620   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:26.318129   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:26.318541   84300 pod_ready.go:103] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:26.812026   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:26.812046   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:26.812054   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:26.812057   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:26.815102   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:26.815726   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:26.815744   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:26.815751   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:26.815755   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:26.817866   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:27.311619   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:27.311638   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:27.311646   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:27.311650   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:27.314020   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:27.314678   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:27.314694   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:27.314702   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:27.314705   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:27.316694   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:42:27.811402   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:27.811424   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:27.811434   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:27.811439   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:27.814252   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:27.814849   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:27.814864   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:27.814871   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:27.814875   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:27.817145   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:28.311986   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:28.312005   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:28.312013   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:28.312016   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:28.314599   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:28.315210   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:28.315223   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:28.315230   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:28.315233   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:28.317251   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:28.812138   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:28.812161   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:28.812169   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:28.812173   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:28.814900   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:28.815490   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:28.815503   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:28.815508   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:28.815511   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:28.817638   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:28.818128   84300 pod_ready.go:103] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:29.311730   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:29.311751   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:29.311759   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:29.311762   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:29.314482   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:29.315292   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:29.315313   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:29.315322   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:29.315327   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:29.317638   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:29.811421   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:29.811441   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:29.811449   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:29.811459   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:29.814371   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:29.814976   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:29.814995   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:29.815002   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:29.815006   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:29.817412   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:30.312239   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:30.312258   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:30.312268   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:30.312274   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:30.314860   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:30.315614   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:30.315632   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:30.315642   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:30.315649   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:30.317741   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:30.812171   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:30.812193   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:30.812200   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:30.812205   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:30.814363   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:30.815083   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:30.815101   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:30.815113   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:30.815119   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:30.817189   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:31.312068   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:42:31.312093   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:31.312104   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:31.312108   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:31.314883   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:31.315452   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:42:31.315467   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:31.315475   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:31.315480   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:31.317716   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:31.318317   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:31.318345   84300 pod_ready.go:82] duration metric: took 16.507153585s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
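[Note] The block above is one complete readiness wait: every ~500ms the client GETs the pod, GETs its node, and checks the pod's Ready condition until it flips to "True" or the 6m budget expires. Below is a minimal sketch of that polling pattern using client-go — not minikube's actual pod_ready.go; the kubeconfig path, namespace, and pod name mirror the log and are illustrative assumptions only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the API server every 500ms, as in the log above,
// until the pod's PodReady condition is "True" or the 6m timeout expires.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumption: a reachable cluster via the default kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-107957-m02"); err != nil {
		panic(err)
	}
	fmt.Printf("pod ready after %s\n", time.Since(start)) // cf. "duration metric: took ..."
}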
	I0916 10:42:31.318360   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:31.318464   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:42:31.318478   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:31.318487   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:31.318492   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:31.320662   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:31.321181   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:42:31.321196   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:31.321205   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:31.321209   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:31.323290   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:31.323768   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:31.323785   84300 pod_ready.go:82] duration metric: took 5.414166ms for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
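[Note] The repeating "GET ... / Request Headers: / Response Status: ... in N milliseconds" triplets come from a logging wrapper around the HTTP transport. A minimal sketch of that idea follows — not Kubernetes' actual round_trippers.go, just an http.RoundTripper decorator that logs each request's verb, URL, headers, and latency in the same shape; the example URL is an assumption.

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingRoundTripper decorates another RoundTripper and logs every call.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vals := range req.Header {
		for _, v := range vals {
			log.Printf("    %s: %s", k, v)
		}
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	// Illustrative request only; any URL exercises the wrapper.
	if _, err := client.Get("https://example.com/"); err != nil {
		log.Fatal(err)
	}
}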
	I0916 10:42:31.323795   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:31.323853   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:31.323860   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:31.323867   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:31.323870   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:31.325920   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:31.326479   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:31.326495   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:31.326501   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:31.326503   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:31.328407   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:42:31.824729   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:31.824757   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:31.824823   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:31.824855   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:31.827705   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:31.828371   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:31.828388   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:31.828400   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:31.828406   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:31.830649   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:32.324117   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:32.324137   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:32.324144   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:32.324148   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:32.328780   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:42:32.329635   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:32.329656   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:32.329673   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:32.329678   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:32.333738   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:42:32.824771   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:32.824793   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:32.824805   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:32.824810   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:32.827751   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:32.828464   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:32.828479   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:32.828487   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:32.828494   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:32.830996   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:33.324830   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:33.324850   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:33.324858   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:33.324863   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:33.327646   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:33.328265   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:33.328283   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:33.328291   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:33.328296   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:33.330611   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:33.331085   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:33.823977   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:33.823999   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:33.824009   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:33.824016   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:33.826766   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:33.827414   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:33.827432   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:33.827439   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:33.827444   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:33.829824   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:34.324415   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:34.324434   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:34.324442   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:34.324447   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:34.327175   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:34.327763   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:34.327779   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:34.327786   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:34.327789   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:34.330011   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:34.824871   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:34.824894   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:34.824902   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:34.824907   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:34.827978   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:34.828582   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:34.828599   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:34.828607   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:34.828614   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:34.830915   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:35.324865   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:35.324888   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:35.324898   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:35.324902   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:35.327824   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:35.328514   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:35.328535   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:35.328545   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:35.328552   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:35.330917   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:35.331362   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:35.824847   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:35.824870   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:35.824878   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:35.824883   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:35.829767   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:42:35.830471   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:35.830489   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:35.830497   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:35.830502   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:35.832881   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:36.324712   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:36.324734   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:36.324741   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:36.324745   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:36.327804   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:36.328430   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:36.328448   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:36.328456   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:36.328461   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:36.330755   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:36.824783   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:36.824803   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:36.824811   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:36.824816   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:36.827800   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:36.828405   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:36.828420   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:36.828428   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:36.828432   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:36.830788   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:37.324676   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:37.324696   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:37.324704   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:37.324708   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:37.327469   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:37.328030   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:37.328043   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:37.328049   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:37.328053   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:37.330314   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:37.824069   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:37.824092   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:37.824099   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:37.824103   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:37.827052   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:37.827732   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:37.827748   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:37.827756   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:37.827760   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:37.830097   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:37.830559   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:38.325014   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:38.325038   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:38.325046   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:38.325050   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:38.328034   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:38.329005   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:38.329028   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:38.329039   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:38.329051   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:38.331449   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:38.824244   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:38.824268   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:38.824276   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:38.824281   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:38.827160   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:38.827824   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:38.827844   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:38.827852   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:38.827857   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:38.830304   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:39.324119   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:39.324142   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:39.324150   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:39.324155   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:39.326923   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:39.327749   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:39.327766   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:39.327774   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:39.327779   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:39.329942   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:39.824939   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:39.824974   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:39.824982   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:39.824986   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:39.827988   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:39.828619   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:39.828635   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:39.828642   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:39.828646   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:39.831138   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:39.831587   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:40.324978   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:40.324999   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:40.325008   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:40.325013   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:40.327606   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:40.328193   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:40.328206   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:40.328213   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:40.328218   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:40.330303   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:40.824084   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:40.824107   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:40.824115   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:40.824119   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:40.826829   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:40.827466   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:40.827482   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:40.827489   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:40.827493   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:40.829649   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:41.324458   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:41.324479   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:41.324486   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:41.324495   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:41.327421   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:41.328075   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:41.328092   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:41.328102   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:41.328107   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:41.330491   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:41.824852   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:41.824877   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:41.824885   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:41.824890   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:41.827778   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:41.828621   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:41.828642   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:41.828654   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:41.828666   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:41.831070   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:42.324949   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:42.324971   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:42.324979   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:42.324983   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:42.327624   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:42.328279   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:42.328296   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:42.328304   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:42.328308   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:42.330442   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:42.330901   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:42.824246   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:42.824270   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:42.824278   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:42.824282   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:42.827303   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:42.828133   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:42.828152   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:42.828159   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:42.828163   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:42.831221   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:43.323984   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:43.324009   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:43.324016   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:43.324019   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:43.326760   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:43.327409   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:43.327427   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:43.327434   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:43.327437   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:43.329669   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:43.823937   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:43.823961   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:43.823969   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:43.823973   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:43.827036   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:43.827919   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:43.827940   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:43.827950   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:43.827956   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:43.830157   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:44.324978   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:44.324996   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:44.325008   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:44.325013   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:44.327674   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:44.328263   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:44.328280   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:44.328287   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:44.328290   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:44.330603   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:44.331031   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:44.824574   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:44.824593   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:44.824601   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:44.824604   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:44.827259   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:44.827926   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:44.827943   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:44.827953   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:44.827959   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:44.830129   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:45.324052   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:45.324074   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:45.324084   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:45.324091   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:45.327701   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:45.328520   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:45.328542   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:45.328551   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:45.328562   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:45.330934   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:45.824806   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:45.824832   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:45.824841   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:45.824849   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:45.827680   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:45.828343   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:45.828359   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:45.828367   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:45.828370   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:45.830471   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:46.324036   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:46.324056   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:46.324063   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:46.324073   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:46.326547   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:46.327243   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:46.327257   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:46.327264   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:46.327268   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:46.329556   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:46.824553   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:46.824576   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:46.824586   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:46.824592   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:46.827261   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:46.827888   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:46.827904   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:46.827911   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:46.827915   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:46.830047   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:46.830438   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:47.324884   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:47.324921   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:47.324930   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:47.324937   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:47.327223   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:47.327846   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:47.327863   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:47.327869   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:47.327873   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:47.330070   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:47.824958   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:47.824983   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:47.824991   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:47.824995   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:47.827798   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:47.828420   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:47.828437   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:47.828444   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:47.828448   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:47.830926   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:48.324655   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:48.324682   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:48.324691   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:48.324694   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:48.327311   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:48.328090   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:48.328107   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:48.328117   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:48.328123   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:48.330286   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:48.824683   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:48.824704   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:48.824712   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:48.824716   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:48.827672   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:48.828250   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:48.828263   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:48.828270   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:48.828274   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:48.843476   84300 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0916 10:42:48.843963   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:49.324222   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:49.324242   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:49.324250   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:49.324254   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:49.326916   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:49.327542   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:49.327561   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:49.327571   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:49.327584   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:49.329790   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:49.824728   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:49.824747   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:49.824755   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:49.824759   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:49.827415   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:49.828033   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:49.828051   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:49.828058   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:49.828062   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:49.830265   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:50.324039   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:50.324059   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:50.324066   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:50.324071   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:50.326788   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:50.327437   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:50.327456   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:50.327465   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:50.327472   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:50.329587   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:50.824412   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:50.824436   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:50.824443   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:50.824446   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:50.827345   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:50.827980   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:50.827996   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:50.828003   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:50.828007   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:50.830164   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:51.324963   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:51.324989   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:51.325000   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:51.325007   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:51.327974   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:51.328613   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:51.328629   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:51.328639   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:51.328650   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:51.330981   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:51.331553   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:51.824186   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:51.824213   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:51.824224   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:51.824230   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:51.827356   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:51.828004   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:51.828020   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:51.828029   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:51.828037   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:51.830303   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:52.324101   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:52.324129   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:52.324141   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:52.324146   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:52.326945   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:52.327562   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:52.327579   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:52.327587   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:52.327591   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:52.329753   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:52.824668   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:52.824692   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:52.824700   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:52.824706   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:52.827734   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:52.828453   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:52.828469   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:52.828477   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:52.828481   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:52.830559   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:53.324110   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:53.324133   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:53.324142   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:53.324147   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:53.327063   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:53.327708   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:53.327724   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:53.327731   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:53.327736   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:53.330307   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:53.824182   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:53.824221   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:53.824231   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:53.824235   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:53.827500   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:53.828168   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:53.828187   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:53.828194   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:53.828198   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:53.830721   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:53.831199   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:54.324567   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:54.324588   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:54.324598   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:54.324618   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:54.327553   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:54.328191   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:54.328208   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:54.328218   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:54.328223   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:54.330490   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:54.824294   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:54.824315   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:54.824323   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:54.824327   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:54.827014   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:54.827668   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:54.827684   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:54.827694   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:54.827703   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:54.829864   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:55.324701   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:55.324721   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:55.324729   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:55.324732   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:55.327431   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:55.328034   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:55.328051   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:55.328062   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:55.328066   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:55.330151   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:55.825022   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:55.825045   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:55.825056   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:55.825061   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:55.828084   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:55.828766   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:55.828783   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:55.828791   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:55.828795   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:55.830971   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:55.831445   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:42:56.324875   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:56.324893   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:56.324901   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:56.324905   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:56.327622   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:56.328206   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:56.328222   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:56.328232   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:56.328240   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:56.330513   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:56.824532   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:56.824552   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:56.824559   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:56.824564   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:56.827264   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:56.827890   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:56.827905   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:56.827912   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:56.827916   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:56.830052   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:57.324886   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:57.324906   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:57.324914   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:57.324918   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:57.382273   84300 round_trippers.go:574] Response Status:  in 57 milliseconds
	I0916 10:42:58.382959   84300 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:58.383010   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:58.383016   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:58.383024   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:58.383027   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:58.964496   84300 round_trippers.go:574] Response Status: 200 OK in 581 milliseconds
	I0916 10:42:58.965257   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:58.965273   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:58.965280   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:58.965285   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:58.999170   84300 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0916 10:42:59.007893   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
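
The exchange just above breaks the steady 2-3ms response pattern: at 10:42:57 the apiserver returned a blank status after 57 milliseconds, with_retry.go honored a Retry-After of 1s, and the retried GET succeeded 581 milliseconds later, after which polling resumed. A sketch of that back-off decision, under the assumption that the client simply reads the Retry-After header as whole seconds; retryAfter is a hypothetical helper, not client-go's actual retry code.

    package readiness

    import (
        "net/http"
        "strconv"
        "time"
    )

    // retryAfter returns the server-requested back-off, falling back to def
    // when the Retry-After header is absent or malformed.
    func retryAfter(resp *http.Response, def time.Duration) time.Duration {
        if resp == nil {
            return def
        }
        if secs, err := strconv.Atoi(resp.Header.Get("Retry-After")); err == nil && secs > 0 {
            return time.Duration(secs) * time.Second
        }
        return def
    }

Given the "Got a Retry-After 1s response for attempt 1" line, such a helper would yield one second, which matches the gap between the failed request at 10:42:57.382 and the retry at 10:42:58.383.
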
	I0916 10:42:59.007987   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:59.007997   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:59.008008   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:59.008015   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:59.120222   84300 round_trippers.go:574] Response Status: 200 OK in 112 milliseconds
	I0916 10:42:59.121952   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:59.122049   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:59.122077   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:59.122095   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:59.216494   84300 round_trippers.go:574] Response Status: 200 OK in 94 milliseconds
	I0916 10:42:59.323987   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:59.324007   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:59.324015   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:59.324020   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:59.327103   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:59.327937   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:59.327954   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:59.327962   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:59.327966   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:59.331869   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:42:59.824322   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:42:59.824343   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:59.824351   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:59.824356   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:59.827285   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:42:59.828199   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:42:59.828219   84300 round_trippers.go:469] Request Headers:
	I0916 10:42:59.828227   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:42:59.828234   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:42:59.830740   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:00.324653   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:00.324678   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:00.324689   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:00.324698   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:00.327400   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:00.328060   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:00.328075   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:00.328082   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:00.328088   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:00.330082   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:43:00.823975   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:00.823999   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:00.824007   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:00.824011   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:00.826939   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:00.827566   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:00.827582   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:00.827590   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:00.827594   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:00.829762   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:01.324746   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:01.324767   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:01.324778   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:01.324784   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:01.327645   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:01.328417   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:01.328432   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:01.328440   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:01.328444   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:01.330569   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:01.331163   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:43:01.824900   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:01.824925   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:01.824939   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:01.824944   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:01.828094   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:01.828719   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:01.828736   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:01.828747   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:01.828750   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:01.830996   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:02.324961   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:02.324983   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:02.324990   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:02.324994   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:02.327810   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:02.328513   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:02.328528   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:02.328536   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:02.328542   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:02.330824   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:02.824768   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:02.824795   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:02.824803   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:02.824807   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:02.827981   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:02.828663   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:02.828680   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:02.828687   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:02.828691   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:02.830953   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:03.324928   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:03.324948   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:03.324956   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:03.324962   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:03.327701   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:03.328350   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:03.328368   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:03.328375   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:03.328379   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:03.330695   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:03.331195   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:43:03.824492   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:03.824514   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:03.824522   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:03.824527   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:03.827468   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:03.828252   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:03.828268   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:03.828276   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:03.828282   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:03.830374   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:04.324137   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:04.324158   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:04.324166   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:04.324169   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:04.326989   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:04.327885   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:04.327908   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:04.327920   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:04.327928   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:04.330343   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:04.824267   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:04.824297   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:04.824308   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:04.824313   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:04.827351   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:04.827985   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:04.828004   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:04.828014   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:04.828021   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:04.830492   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:05.324333   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:05.324357   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:05.324366   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:05.324371   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:05.327119   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:05.327695   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:05.327711   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:05.327721   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:05.327727   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:05.329958   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:05.824842   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:05.824868   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:05.824876   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:05.824879   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:05.827893   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:05.828545   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:05.828563   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:05.828573   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:05.828579   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:05.831056   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:05.831541   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:43:06.324980   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:06.325000   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:06.325011   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:06.325014   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:06.327679   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:06.328327   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:06.328344   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:06.328354   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:06.328359   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:06.330614   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:06.824325   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:06.824345   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:06.824353   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:06.824357   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:06.827431   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:06.828072   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:06.828087   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:06.828094   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:06.828098   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:06.830409   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:07.324026   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:07.324047   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:07.324055   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:07.324061   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:07.326788   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:07.327445   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:07.327460   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:07.327470   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:07.327478   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:07.329568   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:07.824427   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:07.824451   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:07.824459   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:07.824463   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:07.827557   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:07.828240   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:07.828256   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:07.828263   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:07.828266   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:07.830531   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:08.324293   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:08.324312   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:08.324325   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:08.324329   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:08.328619   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:43:08.329600   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:08.329620   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:08.329631   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:08.329637   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:08.332004   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:08.332531   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:43:08.824888   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:08.824909   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:08.824917   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:08.824922   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:08.827871   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:08.828527   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:08.828542   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:08.828549   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:08.828554   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:08.830966   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:09.324934   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:09.324956   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:09.324971   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:09.324978   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:09.327803   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:09.328480   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:09.328495   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:09.328502   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:09.328506   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:09.330526   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:09.824301   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:09.824329   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:09.824337   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:09.824340   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:09.827586   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:09.828309   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:09.828327   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:09.828335   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:09.828340   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:09.831034   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:10.324970   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:10.324990   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:10.324998   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:10.325004   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:10.327936   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:10.328598   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:10.328615   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:10.328622   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:10.328625   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:10.330839   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:10.824667   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:10.824692   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:10.824701   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:10.824708   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:10.827642   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:10.828327   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:10.828343   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:10.828353   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:10.828358   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:10.830665   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:10.831165   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:43:11.324467   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:11.324488   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:11.324496   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:11.324501   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:11.327097   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:11.327849   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:11.327868   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:11.327879   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:11.327887   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:11.329949   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:11.824314   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:11.824335   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:11.824342   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:11.824347   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:11.827445   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:11.828047   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:11.828063   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:11.828070   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:11.828075   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:11.830391   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:12.324039   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:12.324064   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:12.324072   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:12.324076   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:12.326934   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:12.327617   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:12.327635   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:12.327642   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:12.327647   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:12.329801   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:12.824965   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:12.824987   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:12.824994   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:12.824997   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:12.827882   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:12.828490   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:12.828505   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:12.828513   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:12.828517   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:12.830891   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:12.831339   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:43:13.324754   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:13.324773   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:13.324780   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:13.324784   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:13.327636   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:13.328429   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:13.328449   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:13.328456   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:13.328461   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:13.330723   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:13.824648   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:13.824668   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:13.824674   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:13.824678   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:13.827627   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:13.828316   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:13.828332   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:13.828338   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:13.828341   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:13.830649   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:14.324243   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:14.324263   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:14.324272   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:14.324276   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:14.327029   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:14.327709   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:14.327725   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:14.327733   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:14.327738   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:14.330030   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:14.824953   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:14.824975   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:14.824982   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:14.824986   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:14.827980   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:14.828630   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:14.828648   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:14.828658   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:14.828662   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:14.831135   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:14.831586   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:43:15.323958   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:15.323979   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:15.323990   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:15.323996   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:15.326525   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:15.327179   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:15.327193   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:15.327203   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:15.327210   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:15.329323   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:15.824199   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:15.824221   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:15.824228   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:15.824232   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:15.827145   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:15.827800   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:15.827816   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:15.827823   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:15.827832   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:15.830186   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:16.323985   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:16.324006   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:16.324015   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:16.324019   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:16.326846   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:16.327539   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:16.327557   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:16.327564   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:16.327570   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:16.329790   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:16.824397   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:16.824422   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:16.824432   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:16.824438   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:16.827546   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:16.828235   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:16.828255   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:16.828264   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:16.828270   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:16.830681   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:17.324153   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:17.324179   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:17.324190   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:17.324196   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:17.326943   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:17.327754   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:17.327772   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:17.327784   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:17.327791   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:17.330114   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:17.330559   84300 pod_ready.go:103] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False"
	I0916 10:43:17.824970   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:17.824995   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:17.825006   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:17.825012   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:17.827918   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:17.828502   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:17.828520   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:17.828531   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:17.828538   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:17.830613   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	[... ~26 near-identical poll cycles elided (10:43:18.324 - 10:43:30.830): every ~500ms the same pair of requests, GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957 followed by GET https://192.168.49.2:8443/api/v1/nodes/ha-107957, each answered 200 OK in 1-3 milliseconds; pod_ready.go:103 reported pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"False" at 10:43:19.331, 10:43:21.331, 10:43:23.831, 10:43:25.831, 10:43:28.331 and 10:43:30.830 ...]
	I0916 10:43:31.324923   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:31.324946   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.324956   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.324963   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.327701   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.328338   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:31.328352   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.328359   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.328363   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.330366   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:43:31.824618   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:43:31.824639   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.824650   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.824659   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.827258   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.827889   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:31.827903   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.827910   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.827917   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.830190   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.830618   84300 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:31.830636   84300 pod_ready.go:82] duration metric: took 1m0.506834839s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
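The one-minute wait above is an ordinary Ready-condition poll against the pod. It can be reproduced by hand with kubectl wait (a sketch, assuming the kubeconfig context carries the profile name ha-107957, as minikube normally sets it):

    kubectl --context ha-107957 -n kube-system wait \
      --for=condition=Ready pod/kube-controller-manager-ha-107957 --timeout=6m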
	I0916 10:43:31.830650   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:31.830707   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:43:31.830715   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.830721   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.830726   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.833145   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.833778   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:43:31.833797   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.833806   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.833811   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.836061   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.836495   84300 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:31.836513   84300 pod_ready.go:82] duration metric: took 5.856933ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:31.836523   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:31.836606   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:43:31.836616   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.836623   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.836627   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.838859   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.839533   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:31.839547   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.839554   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.839559   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.841818   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.842288   84300 pod_ready.go:98] node "ha-107957-m03" hosting pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m03" has status "Ready":"Unknown"
	I0916 10:43:31.842315   84300 pod_ready.go:82] duration metric: took 5.784292ms for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:43:31.842327   84300 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m03" hosting pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m03" has status "Ready":"Unknown"
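The "(skipping!)" branch fires because the hosting node, not the pod, is unhealthy. The node condition it inspects can be read directly with a jsonpath filter (a sketch; on this run it would print Unknown for ha-107957-m03):

    kubectl --context ha-107957 get node ha-107957-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'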
	I0916 10:43:31.842338   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:31.842414   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:43:31.842424   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.842433   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.842440   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.844763   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.845370   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:31.845387   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.845398   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.845405   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.847362   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:43:31.847768   84300 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:31.847785   84300 pod_ready.go:82] duration metric: took 5.43393ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:31.847794   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:31.847842   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:43:31.847850   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.847856   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.847860   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.849978   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.850487   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:31.850511   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:31.850521   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:31.850526   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:31.852635   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:31.853112   84300 pod_ready.go:98] node "ha-107957-m03" hosting pod "kube-proxy-f2scr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m03" has status "Ready":"Unknown"
	I0916 10:43:31.853131   84300 pod_ready.go:82] duration metric: took 5.331762ms for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	E0916 10:43:31.853141   84300 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m03" hosting pod "kube-proxy-f2scr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m03" has status "Ready":"Unknown"
	I0916 10:43:31.853150   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:32.025588   84300 request.go:632] Waited for 172.338987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:43:32.025650   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:43:32.025659   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:32.025673   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:32.025687   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:32.028374   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:32.225472   84300 request.go:632] Waited for 196.351167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:43:32.225546   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:43:32.225552   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:32.225559   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:32.225563   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:32.227946   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:32.228404   84300 pod_ready.go:98] node "ha-107957-m04" hosting pod "kube-proxy-hm8zn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m04" has status "Ready":"Unknown"
	I0916 10:43:32.228425   84300 pod_ready.go:82] duration metric: took 375.265524ms for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	E0916 10:43:32.228436   84300 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m04" hosting pod "kube-proxy-hm8zn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m04" has status "Ready":"Unknown"
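The request.go:632 waits come from client-go's default client-side rate limiter (roughly 5 QPS with a small burst), not from server-side API Priority and Fairness. kubectl surfaces the same notices when run at higher log verbosity (the exact verbosity level here is an assumption):

    kubectl --context ha-107957 get pods -A -v=3 2>&1 | grep -i throttling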
	I0916 10:43:32.228452   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:32.425460   84300 request.go:632] Waited for 196.916918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:43:32.425550   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:43:32.425560   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:32.425571   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:32.425577   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:32.428285   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:32.625380   84300 request.go:632] Waited for 196.354809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:43:32.625457   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:43:32.625466   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:32.625473   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:32.625479   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:32.628061   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:32.628504   84300 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:32.628522   84300 pod_ready.go:82] duration metric: took 400.060202ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:32.628532   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:32.825641   84300 request.go:632] Waited for 197.048977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:43:32.825726   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:43:32.825735   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:32.825744   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:32.825750   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:32.828435   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:33.025195   84300 request.go:632] Waited for 196.158007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:33.025271   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:33.025276   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:33.025284   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:33.025289   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:33.027942   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:33.028344   84300 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:33.028361   84300 pod_ready.go:82] duration metric: took 399.82202ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:33.028373   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:33.225310   84300 request.go:632] Waited for 196.852861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:43:33.225398   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:43:33.225408   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:33.225416   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:33.225420   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:33.227708   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:33.425595   84300 request.go:632] Waited for 197.349732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:43:33.425682   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:43:33.425694   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:33.425707   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:33.425718   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:33.428337   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:33.428905   84300 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:33.428927   84300 pod_ready.go:82] duration metric: took 400.54658ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:33.428941   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:33.624906   84300 request.go:632] Waited for 195.894005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:43:33.624966   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:43:33.624984   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:33.624991   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:33.624997   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:33.627676   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:33.825552   84300 request.go:632] Waited for 197.379796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:33.825623   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:33.825631   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:33.825639   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:33.825648   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:33.828476   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:33.829000   84300 pod_ready.go:98] node "ha-107957-m03" hosting pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m03" has status "Ready":"Unknown"
	I0916 10:43:33.829021   84300 pod_ready.go:82] duration metric: took 400.072348ms for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:43:33.829031   84300 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m03" hosting pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m03" has status "Ready":"Unknown"
	I0916 10:43:33.829044   84300 pod_ready.go:39] duration metric: took 1m19.622403733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:43:33.829059   84300 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:43:33.829088   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:43:33.829136   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:43:33.862495   84300 cri.go:89] found id: "1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f"
	I0916 10:43:33.862514   84300 cri.go:89] found id: "6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982"
	I0916 10:43:33.862518   84300 cri.go:89] found id: ""
	I0916 10:43:33.862525   84300 logs.go:276] 2 containers: [1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f 6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982]
	I0916 10:43:33.862577   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:33.865862   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:33.869000   84300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:43:33.869052   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:43:33.901295   84300 cri.go:89] found id: "975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48"
	I0916 10:43:33.901320   84300 cri.go:89] found id: "34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9"
	I0916 10:43:33.901324   84300 cri.go:89] found id: ""
	I0916 10:43:33.901355   84300 logs.go:276] 2 containers: [975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48 34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9]
	I0916 10:43:33.901411   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:33.904687   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:33.907693   84300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:43:33.907748   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:43:33.939558   84300 cri.go:89] found id: ""
	I0916 10:43:33.939582   84300 logs.go:276] 0 containers: []
	W0916 10:43:33.939591   84300 logs.go:278] No container was found matching "coredns"
	I0916 10:43:33.939597   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:43:33.939654   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:43:33.972581   84300 cri.go:89] found id: "8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50"
	I0916 10:43:33.972603   84300 cri.go:89] found id: "957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2"
	I0916 10:43:33.972607   84300 cri.go:89] found id: ""
	I0916 10:43:33.972613   84300 logs.go:276] 2 containers: [8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50 957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2]
	I0916 10:43:33.972667   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:33.976199   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:33.979323   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:43:33.979381   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:43:34.011516   84300 cri.go:89] found id: "c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62"
	I0916 10:43:34.011540   84300 cri.go:89] found id: ""
	I0916 10:43:34.011547   84300 logs.go:276] 1 containers: [c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62]
	I0916 10:43:34.011613   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:34.014922   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:43:34.014991   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:43:34.047899   84300 cri.go:89] found id: "86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5"
	I0916 10:43:34.047919   84300 cri.go:89] found id: "7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e"
	I0916 10:43:34.047923   84300 cri.go:89] found id: ""
	I0916 10:43:34.047930   84300 logs.go:276] 2 containers: [86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5 7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e]
	I0916 10:43:34.047972   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:34.051189   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:34.054303   84300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:43:34.054372   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:43:34.086865   84300 cri.go:89] found id: "efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04"
	I0916 10:43:34.086892   84300 cri.go:89] found id: ""
	I0916 10:43:34.086900   84300 logs.go:276] 1 containers: [efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04]
	I0916 10:43:34.086954   84300 ssh_runner.go:195] Run: which crictl
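Each discovery step above is the same two-command probe, repeated per component: list matching container IDs, then (in the gathering phase below) tail each container's logs. Run by hand inside the node it looks like this, with <container-id> standing in for an ID from the first command:

    sudo crictl ps -a --quiet --name=kube-apiserver   # one container ID per line
    sudo crictl logs --tail 400 <container-id>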
	I0916 10:43:34.090453   84300 logs.go:123] Gathering logs for kube-scheduler [957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2] ...
	I0916 10:43:34.090483   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2"
	I0916 10:43:34.126397   84300 logs.go:123] Gathering logs for kindnet [efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04] ...
	I0916 10:43:34.126427   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04"
	I0916 10:43:34.161152   84300 logs.go:123] Gathering logs for kubelet ...
	I0916 10:43:34.161178   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:43:34.222085   84300 logs.go:123] Gathering logs for kube-apiserver [6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982] ...
	I0916 10:43:34.222125   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982"
	I0916 10:43:34.259325   84300 logs.go:123] Gathering logs for kube-scheduler [8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50] ...
	I0916 10:43:34.259356   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50"
	I0916 10:43:34.303204   84300 logs.go:123] Gathering logs for kube-controller-manager [7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e] ...
	I0916 10:43:34.303235   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e"
	I0916 10:43:34.335862   84300 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:43:34.335891   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:43:34.394279   84300 logs.go:123] Gathering logs for container status ...
	I0916 10:43:34.394314   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:43:34.432675   84300 logs.go:123] Gathering logs for kube-apiserver [1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f] ...
	I0916 10:43:34.432715   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f"
	I0916 10:43:34.470470   84300 logs.go:123] Gathering logs for etcd [34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9] ...
	I0916 10:43:34.470504   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9"
	I0916 10:43:34.516699   84300 logs.go:123] Gathering logs for kube-proxy [c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62] ...
	I0916 10:43:34.516731   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62"
	I0916 10:43:34.549266   84300 logs.go:123] Gathering logs for dmesg ...
	I0916 10:43:34.549291   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:43:34.563145   84300 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:43:34.563183   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:43:34.754663   84300 logs.go:123] Gathering logs for etcd [975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48] ...
	I0916 10:43:34.754697   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48"
	I0916 10:43:34.798232   84300 logs.go:123] Gathering logs for kube-controller-manager [86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5] ...
	I0916 10:43:34.798263   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5"
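The gathering commands are executed verbatim over SSH, so the same bundle can be collected manually from a shell on the node (minikube ssh -p ha-107957, assuming the profile name); the commands below are exactly those logged above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig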
	I0916 10:43:37.347213   84300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:43:37.358398   84300 api_server.go:72] duration metric: took 1m38.59690106s to wait for apiserver process to appear ...
	I0916 10:43:37.358426   84300 api_server.go:88] waiting for apiserver healthz status ...
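The healthz wait polls the apiserver's /healthz endpoint; the equivalent manual probe is a single request (a sketch; -k because the test cluster uses minikube's self-signed certificates):

    curl -fsk https://192.168.49.2:8443/healthz   # prints "ok" when healthy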
	I0916 10:43:37.358462   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:43:37.358511   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:43:37.390450   84300 cri.go:89] found id: "1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f"
	I0916 10:43:37.390474   84300 cri.go:89] found id: "6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982"
	I0916 10:43:37.390478   84300 cri.go:89] found id: ""
	I0916 10:43:37.390484   84300 logs.go:276] 2 containers: [1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f 6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982]
	I0916 10:43:37.390537   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.394303   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.397588   84300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:43:37.397676   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:43:37.430459   84300 cri.go:89] found id: "975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48"
	I0916 10:43:37.430482   84300 cri.go:89] found id: "34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9"
	I0916 10:43:37.430490   84300 cri.go:89] found id: ""
	I0916 10:43:37.430498   84300 logs.go:276] 2 containers: [975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48 34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9]
	I0916 10:43:37.430554   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.434087   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.437313   84300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:43:37.437407   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:43:37.468989   84300 cri.go:89] found id: ""
	I0916 10:43:37.469017   84300 logs.go:276] 0 containers: []
	W0916 10:43:37.469030   84300 logs.go:278] No container was found matching "coredns"
	I0916 10:43:37.469038   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:43:37.469090   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:43:37.501033   84300 cri.go:89] found id: "8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50"
	I0916 10:43:37.501054   84300 cri.go:89] found id: "957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2"
	I0916 10:43:37.501058   84300 cri.go:89] found id: ""
	I0916 10:43:37.501064   84300 logs.go:276] 2 containers: [8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50 957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2]
	I0916 10:43:37.501111   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.504490   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.507568   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:43:37.507624   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:43:37.540853   84300 cri.go:89] found id: "c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62"
	I0916 10:43:37.540875   84300 cri.go:89] found id: ""
	I0916 10:43:37.540882   84300 logs.go:276] 1 containers: [c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62]
	I0916 10:43:37.540927   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.544192   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:43:37.544250   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:43:37.575419   84300 cri.go:89] found id: "86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5"
	I0916 10:43:37.575445   84300 cri.go:89] found id: "7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e"
	I0916 10:43:37.575451   84300 cri.go:89] found id: ""
	I0916 10:43:37.575460   84300 logs.go:276] 2 containers: [86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5 7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e]
	I0916 10:43:37.575505   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.578805   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:37.581890   84300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:43:37.581948   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:43:37.613720   84300 cri.go:89] found id: "efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04"
	I0916 10:43:37.613743   84300 cri.go:89] found id: ""
	I0916 10:43:37.613750   84300 logs.go:276] 1 containers: [efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04]
	I0916 10:43:37.613801   84300 ssh_runner.go:195] Run: which crictl
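For reference, the discovery loop above resolves container IDs one component at a time with `crictl ps -a --quiet --name=<component>`; crictl prints one ID per line, and empty output means no match (as with coredns on this node). A minimal Go sketch of the same pattern — the sudo invocation and the component list here are illustrative assumptions, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs asks crictl for every container (any state) whose
    // name matches component and returns the IDs it prints, one per line.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // empty output -> empty slice
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("listing %s failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }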
	I0916 10:43:37.617245   84300 logs.go:123] Gathering logs for dmesg ...
	I0916 10:43:37.617270   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:43:37.632019   84300 logs.go:123] Gathering logs for etcd [34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9] ...
	I0916 10:43:37.632049   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9"
	I0916 10:43:37.677063   84300 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:43:37.677099   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:43:37.735645   84300 logs.go:123] Gathering logs for kube-proxy [c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62] ...
	I0916 10:43:37.735680   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62"
	I0916 10:43:37.768526   84300 logs.go:123] Gathering logs for kube-apiserver [1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f] ...
	I0916 10:43:37.768556   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f"
	I0916 10:43:37.806608   84300 logs.go:123] Gathering logs for etcd [975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48] ...
	I0916 10:43:37.806640   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48"
	I0916 10:43:37.850008   84300 logs.go:123] Gathering logs for kube-scheduler [957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2] ...
	I0916 10:43:37.850038   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2"
	I0916 10:43:37.884854   84300 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:43:37.884887   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:43:38.073616   84300 logs.go:123] Gathering logs for kube-controller-manager [86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5] ...
	I0916 10:43:38.073656   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5"
	I0916 10:43:38.121521   84300 logs.go:123] Gathering logs for container status ...
	I0916 10:43:38.121556   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:43:38.160117   84300 logs.go:123] Gathering logs for kube-controller-manager [7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e] ...
	I0916 10:43:38.160146   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e"
	I0916 10:43:38.193486   84300 logs.go:123] Gathering logs for kindnet [efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04] ...
	I0916 10:43:38.193517   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04"
	I0916 10:43:38.228007   84300 logs.go:123] Gathering logs for kubelet ...
	I0916 10:43:38.228036   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:43:38.292697   84300 logs.go:123] Gathering logs for kube-apiserver [6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982] ...
	I0916 10:43:38.292735   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982"
	I0916 10:43:38.330116   84300 logs.go:123] Gathering logs for kube-scheduler [8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50] ...
	I0916 10:43:38.330156   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50"
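Each "Gathering logs for X" pair above follows that discovery pass and pulls the last 400 lines per source: `crictl logs --tail 400 <id>` for containers, `journalctl -u <unit> -n 400` for kubelet and CRI-O, and a filtered dmesg for the kernel ring buffer. A sketch of the two common cases, under the same 400-line tail assumption:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerLogs returns the last n lines of a container's log via crictl.
    func containerLogs(id string, n int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    // unitLogs returns the last n journal lines for a systemd unit.
    func unitLogs(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := unitLogs("crio", 400)
        fmt.Println(len(logs), err)
    }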
	I0916 10:43:40.879478   84300 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:43:40.885377   84300 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:43:40.885470   84300 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:43:40.885480   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:40.885488   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:40.885491   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:40.886192   84300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:43:40.886290   84300 api_server.go:141] control plane version: v1.31.1
	I0916 10:43:40.886305   84300 api_server.go:131] duration metric: took 3.527871238s to wait for apiserver health ...
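The 3.5 s "wait for apiserver health" above is a poll of https://192.168.49.2:8443/healthz until it answers 200 with body "ok", followed by a single GET on /version to record the control-plane version. A minimal sketch of the probe; skipping TLS verification is an illustration-only shortcut (minikube verifies against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Illustration only: real code trusts the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // keep polling until healthy
        }
    }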
	I0916 10:43:40.886314   84300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:43:40.886339   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 10:43:40.886386   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 10:43:40.918788   84300 cri.go:89] found id: "1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f"
	I0916 10:43:40.918809   84300 cri.go:89] found id: "6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982"
	I0916 10:43:40.918813   84300 cri.go:89] found id: ""
	I0916 10:43:40.918820   84300 logs.go:276] 2 containers: [1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f 6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982]
	I0916 10:43:40.918865   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:40.922130   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:40.925172   84300 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 10:43:40.925225   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 10:43:40.956987   84300 cri.go:89] found id: "975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48"
	I0916 10:43:40.957014   84300 cri.go:89] found id: "34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9"
	I0916 10:43:40.957019   84300 cri.go:89] found id: ""
	I0916 10:43:40.957028   84300 logs.go:276] 2 containers: [975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48 34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9]
	I0916 10:43:40.957085   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:40.960464   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:40.963575   84300 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 10:43:40.963628   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 10:43:40.995484   84300 cri.go:89] found id: ""
	I0916 10:43:40.995506   84300 logs.go:276] 0 containers: []
	W0916 10:43:40.995513   84300 logs.go:278] No container was found matching "coredns"
	I0916 10:43:40.995519   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 10:43:40.995577   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 10:43:41.027767   84300 cri.go:89] found id: "8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50"
	I0916 10:43:41.027793   84300 cri.go:89] found id: "957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2"
	I0916 10:43:41.027798   84300 cri.go:89] found id: ""
	I0916 10:43:41.027805   84300 logs.go:276] 2 containers: [8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50 957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2]
	I0916 10:43:41.027867   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:41.031207   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:41.034278   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 10:43:41.034336   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 10:43:41.067407   84300 cri.go:89] found id: "c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62"
	I0916 10:43:41.067427   84300 cri.go:89] found id: ""
	I0916 10:43:41.067434   84300 logs.go:276] 1 containers: [c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62]
	I0916 10:43:41.067487   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:41.070893   84300 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 10:43:41.070965   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 10:43:41.103319   84300 cri.go:89] found id: "86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5"
	I0916 10:43:41.103350   84300 cri.go:89] found id: "7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e"
	I0916 10:43:41.103361   84300 cri.go:89] found id: ""
	I0916 10:43:41.103373   84300 logs.go:276] 2 containers: [86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5 7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e]
	I0916 10:43:41.103442   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:41.106945   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:41.110017   84300 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 10:43:41.110079   84300 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 10:43:41.142296   84300 cri.go:89] found id: "efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04"
	I0916 10:43:41.142325   84300 cri.go:89] found id: ""
	I0916 10:43:41.142335   84300 logs.go:276] 1 containers: [efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04]
	I0916 10:43:41.142394   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:41.145673   84300 logs.go:123] Gathering logs for etcd [975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48] ...
	I0916 10:43:41.145698   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 975e89a277c6f8027b037b5633e586209e010b0e3bafe4f3af5d99b25c40ab48"
	I0916 10:43:41.190902   84300 logs.go:123] Gathering logs for kube-proxy [c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62] ...
	I0916 10:43:41.190940   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2db7b7d696ef37a4d71ec179a1fa451384d88966a544d199175e1fa72c28b62"
	I0916 10:43:41.225492   84300 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:43:41.225517   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:43:41.459448   84300 logs.go:123] Gathering logs for kube-controller-manager [86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5] ...
	I0916 10:43:41.459479   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86cec437e1b5a9d3b13a78fe3564e14e93c81a648e403543132f61e5c2fb32d5"
	I0916 10:43:41.508302   84300 logs.go:123] Gathering logs for kindnet [efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04] ...
	I0916 10:43:41.508336   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efe8defba4931af9be1d4d038f43522199bb6291340575b3bd4ad2102a05ad04"
	I0916 10:43:41.545997   84300 logs.go:123] Gathering logs for kube-scheduler [8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50] ...
	I0916 10:43:41.546031   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eb0e560ccfa9b55ceae4ef322cc4aee2289728e4a10116a5753eac7cb0bba50"
	I0916 10:43:41.592283   84300 logs.go:123] Gathering logs for kube-apiserver [6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982] ...
	I0916 10:43:41.592320   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e8db56e136e84d9c8a0a08797c3ea0d775790f43d112bba31b3f3f789778982"
	I0916 10:43:41.627763   84300 logs.go:123] Gathering logs for kube-scheduler [957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2] ...
	I0916 10:43:41.627794   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 957ce3aaa980de8050639f1ba31faffbeaea265d144a7be7d41d0b87443ff9f2"
	I0916 10:43:41.659139   84300 logs.go:123] Gathering logs for kube-controller-manager [7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e] ...
	I0916 10:43:41.659163   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a63462ceb5a087dc6ef451644300f5355cbf2a06a782ab19dba55df84fccf8e"
	I0916 10:43:41.691348   84300 logs.go:123] Gathering logs for kube-apiserver [1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f] ...
	I0916 10:43:41.691373   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ce0906352e084a757a55862f75a5255176f1982c0f5156adf82bca18ec4b64f"
	I0916 10:43:41.730895   84300 logs.go:123] Gathering logs for dmesg ...
	I0916 10:43:41.730929   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:43:41.744215   84300 logs.go:123] Gathering logs for etcd [34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9] ...
	I0916 10:43:41.744245   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34f5e75ce7c2b11a7e08e8c8de2c5476cee54e35d385474b5b37cadc3467a1c9"
	I0916 10:43:41.789052   84300 logs.go:123] Gathering logs for CRI-O ...
	I0916 10:43:41.789086   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 10:43:41.842676   84300 logs.go:123] Gathering logs for container status ...
	I0916 10:43:41.842728   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:43:41.879023   84300 logs.go:123] Gathering logs for kubelet ...
	I0916 10:43:41.879051   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:43:44.444339   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:43:44.444365   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:44.444376   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:44.444379   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:44.449623   84300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:43:44.456234   84300 system_pods.go:59] 26 kube-system pods found
	I0916 10:43:44.456280   84300 system_pods.go:61] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:43:44.456289   84300 system_pods.go:61] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:43:44.456295   84300 system_pods.go:61] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:43:44.456303   84300 system_pods.go:61] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:43:44.456308   84300 system_pods.go:61] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:43:44.456317   84300 system_pods.go:61] "kindnet-4lkzl" [d08902f4-b63c-46cc-b388-c4fcbe8fc960] Running
	I0916 10:43:44.456322   84300 system_pods.go:61] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:43:44.456326   84300 system_pods.go:61] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:43:44.456332   84300 system_pods.go:61] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:43:44.456336   84300 system_pods.go:61] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:43:44.456340   84300 system_pods.go:61] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:43:44.456344   84300 system_pods.go:61] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:43:44.456350   84300 system_pods.go:61] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:43:44.456354   84300 system_pods.go:61] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:43:44.456360   84300 system_pods.go:61] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:43:44.456364   84300 system_pods.go:61] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:43:44.456370   84300 system_pods.go:61] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:43:44.456373   84300 system_pods.go:61] "kube-proxy-hm8zn" [6ea6916e-f34c-42b3-996b-033915687fd1] Running
	I0916 10:43:44.456380   84300 system_pods.go:61] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:43:44.456385   84300 system_pods.go:61] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:43:44.456393   84300 system_pods.go:61] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:43:44.456400   84300 system_pods.go:61] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:43:44.456412   84300 system_pods.go:61] "kube-vip-ha-107957" [d508299d-30c6-4f09-8f93-04280ddc9c11] Running
	I0916 10:43:44.456420   84300 system_pods.go:61] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:43:44.456424   84300 system_pods.go:61] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:43:44.456429   84300 system_pods.go:61] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:43:44.456436   84300 system_pods.go:74] duration metric: took 3.57011543s to wait for pod list to return data ...
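The pod wait above is a plain REST GET on /api/v1/namespaces/kube-system/pods, after which each pod's phase is checked. The same check expressed through client-go might look like this sketch (the kubeconfig path is the one the logs reference, taken here as an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("not running: %s (%s)\n", p.Name, p.Status.Phase)
            }
        }
    }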
	I0916 10:43:44.456445   84300 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:43:44.456536   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:43:44.456545   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:44.456552   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:44.456558   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:44.459401   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:44.459655   84300 default_sa.go:45] found service account: "default"
	I0916 10:43:44.459674   84300 default_sa.go:55] duration metric: took 3.220993ms for default service account to be created ...
	I0916 10:43:44.459683   84300 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:43:44.459762   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:43:44.459771   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:44.459781   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:44.459790   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:44.463992   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:43:44.470721   84300 system_pods.go:86] 26 kube-system pods found
	I0916 10:43:44.470747   84300 system_pods.go:89] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:43:44.470753   84300 system_pods.go:89] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:43:44.470758   84300 system_pods.go:89] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:43:44.470761   84300 system_pods.go:89] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:43:44.470765   84300 system_pods.go:89] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:43:44.470768   84300 system_pods.go:89] "kindnet-4lkzl" [d08902f4-b63c-46cc-b388-c4fcbe8fc960] Running
	I0916 10:43:44.470772   84300 system_pods.go:89] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:43:44.470775   84300 system_pods.go:89] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:43:44.470778   84300 system_pods.go:89] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:43:44.470782   84300 system_pods.go:89] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:43:44.470787   84300 system_pods.go:89] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:43:44.470793   84300 system_pods.go:89] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:43:44.470797   84300 system_pods.go:89] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:43:44.470801   84300 system_pods.go:89] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:43:44.470805   84300 system_pods.go:89] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:43:44.470810   84300 system_pods.go:89] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:43:44.470814   84300 system_pods.go:89] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:43:44.470817   84300 system_pods.go:89] "kube-proxy-hm8zn" [6ea6916e-f34c-42b3-996b-033915687fd1] Running
	I0916 10:43:44.470823   84300 system_pods.go:89] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:43:44.470826   84300 system_pods.go:89] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:43:44.470830   84300 system_pods.go:89] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:43:44.470837   84300 system_pods.go:89] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:43:44.470842   84300 system_pods.go:89] "kube-vip-ha-107957" [d508299d-30c6-4f09-8f93-04280ddc9c11] Running
	I0916 10:43:44.470850   84300 system_pods.go:89] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:43:44.470855   84300 system_pods.go:89] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:43:44.470862   84300 system_pods.go:89] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:43:44.470871   84300 system_pods.go:126] duration metric: took 11.17838ms to wait for k8s-apps to be running ...
	I0916 10:43:44.470884   84300 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:43:44.470935   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:43:44.481866   84300 system_svc.go:56] duration metric: took 10.974234ms WaitForService to wait for kubelet
	I0916 10:43:44.481895   84300 kubeadm.go:582] duration metric: took 1m45.72040265s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
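The kubelet check that closes this wait loop is just an exit-code probe: `systemctl is-active --quiet <unit>` exits 0 when the unit is active and non-zero otherwise. A sketch (the canonical unit name is plain "kubelet"; the extra "service" token in the logged command is minikube's own phrasing):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run returns nil only on exit status 0, i.e. the unit is active.
        if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet not active:", err)
            return
        }
        fmt.Println("kubelet active")
    }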
	I0916 10:43:44.481912   84300 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:43:44.481996   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:43:44.482005   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:44.482012   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:44.482016   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:44.484981   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:44.486169   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:43:44.486194   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:43:44.486209   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:43:44.486216   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:43:44.486221   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:43:44.486226   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:43:44.486232   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:43:44.486238   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:43:44.486244   84300 node_conditions.go:105] duration metric: took 4.327061ms to run NodePressure ...
	I0916 10:43:44.486260   84300 start.go:241] waiting for startup goroutines ...
	I0916 10:43:44.486292   84300 start.go:255] writing updated cluster config ...
	I0916 10:43:44.488494   84300 out.go:201] 
	I0916 10:43:44.489983   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:43:44.490087   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:43:44.491829   84300 out.go:177] * Starting "ha-107957-m03" control-plane node in "ha-107957" cluster
	I0916 10:43:44.493395   84300 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:43:44.494687   84300 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:43:44.495887   84300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:43:44.495908   84300 cache.go:56] Caching tarball of preloaded images
	I0916 10:43:44.495914   84300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:43:44.496008   84300 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:43:44.496021   84300 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:43:44.496130   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:43:44.515346   84300 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:43:44.515363   84300 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:43:44.515453   84300 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:43:44.515472   84300 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:43:44.515478   84300 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:43:44.515491   84300 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:43:44.515502   84300 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:43:44.516634   84300 image.go:273] response: 
	I0916 10:43:44.579091   84300 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:43:44.579137   84300 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:43:44.579178   84300 start.go:360] acquireMachinesLock for ha-107957-m03: {Name:mk0f035d5dad9998d086b052d83625d4474d070c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:43:44.579253   84300 start.go:364] duration metric: took 52.346µs to acquireMachinesLock for "ha-107957-m03"
	I0916 10:43:44.579278   84300 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:43:44.579286   84300 fix.go:54] fixHost starting: m03
	I0916 10:43:44.579580   84300 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:43:44.596053   84300 fix.go:112] recreateIfNeeded on ha-107957-m03: state=Stopped err=<nil>
	W0916 10:43:44.596087   84300 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:43:44.599031   84300 out.go:177] * Restarting existing docker container for "ha-107957-m03" ...
	I0916 10:43:44.600422   84300 cli_runner.go:164] Run: docker start ha-107957-m03
	I0916 10:43:44.887078   84300 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:43:44.906478   84300 kic.go:430] container "ha-107957-m03" state is running.
	I0916 10:43:44.906793   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:43:44.926310   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:43:44.926557   84300 machine.go:93] provisionDockerMachine start ...
	I0916 10:43:44.926628   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:44.944931   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:44.945179   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0916 10:43:44.945194   84300 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:43:44.945969   84300 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46072->127.0.0.1:32818: read: connection reset by peer
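The handshake failure above is expected: the container was started a moment earlier and sshd inside it is not yet accepting connections, so the provisioner keeps redialing until it gets through (about three seconds later, on the next lines). A hedged sketch of such a retry loop with golang.org/x/crypto/ssh; disabling host-key checking and omitting the key-based Auth setup are illustration-only shortcuts:

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry redials until the SSH handshake succeeds or the deadline
    // passes, swallowing transient errors such as "connection reset by peer"
    // while sshd is still starting.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
        var lastErr error
        for start := time.Now(); time.Since(start) < deadline; {
            c, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return c, nil
            }
            lastErr = err
            time.Sleep(time.Second)
        }
        return nil, lastErr
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
            Timeout:         5 * time.Second,
            // Auth: real code would load the machine's id_rsa private key here.
        }
        c, err := dialWithRetry("127.0.0.1:32818", cfg, time.Minute)
        if err != nil {
            fmt.Println("ssh never came up:", err)
            return
        }
        defer c.Close()
        fmt.Println("connected")
    }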
	I0916 10:43:48.177392   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m03
	
	I0916 10:43:48.177422   84300 ubuntu.go:169] provisioning hostname "ha-107957-m03"
	I0916 10:43:48.177490   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:48.196348   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:48.196534   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0916 10:43:48.196547   84300 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m03 && echo "ha-107957-m03" | sudo tee /etc/hostname
	I0916 10:43:48.353264   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m03
	
	I0916 10:43:48.353372   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:48.371105   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:48.371286   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0916 10:43:48.371302   84300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:43:48.642219   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
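The shell block above pins the hostname idempotently: if any /etc/hosts line already maps ha-107957-m03 nothing changes; otherwise an existing 127.0.1.1 entry is rewritten in place, or a new one is appended. Running it twice therefore leaves exactly one entry like:

    127.0.1.1 ha-107957-m03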
	I0916 10:43:48.642326   84300 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:43:48.642362   84300 ubuntu.go:177] setting up certificates
	I0916 10:43:48.642400   84300 provision.go:84] configureAuth start
	I0916 10:43:48.642484   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:43:48.664342   84300 provision.go:143] copyHostCerts
	I0916 10:43:48.664377   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:43:48.664413   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:43:48.664424   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:43:48.664493   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:43:48.664581   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:43:48.664610   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:43:48.664620   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:43:48.664660   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:43:48.664718   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:43:48.664743   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:43:48.664752   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:43:48.664786   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:43:48.664851   84300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m03 san=[127.0.0.1 192.168.49.4 ha-107957-m03 localhost minikube]
	I0916 10:43:48.831149   84300 provision.go:177] copyRemoteCerts
	I0916 10:43:48.831249   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:43:48.831285   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:48.848663   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:43:48.945974   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:43:48.946059   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:43:48.969409   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:43:48.969482   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:43:48.992552   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:43:48.992623   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:43:49.018820   84300 provision.go:87] duration metric: took 376.391469ms to configureAuth
	I0916 10:43:49.018848   84300 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:43:49.019060   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:43:49.019155   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:49.038405   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:49.038641   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0916 10:43:49.038664   84300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:43:50.366467   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:43:50.366491   84300 machine.go:96] duration metric: took 5.439919286s to provisionDockerMachine
	I0916 10:43:50.366503   84300 start.go:293] postStartSetup for "ha-107957-m03" (driver="docker")
	I0916 10:43:50.366513   84300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:43:50.366562   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:43:50.366604   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:50.384680   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:43:50.482380   84300 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:43:50.485565   84300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:43:50.485646   84300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:43:50.485667   84300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:43:50.485688   84300 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:43:50.485699   84300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:43:50.485757   84300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:43:50.485846   84300 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:43:50.485858   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:43:50.485967   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:43:50.494073   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:43:50.517138   84300 start.go:296] duration metric: took 150.61849ms for postStartSetup
	I0916 10:43:50.517229   84300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:43:50.517270   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:50.535083   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:43:50.627198   84300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:43:50.631872   84300 fix.go:56] duration metric: took 6.052579409s for fixHost
	I0916 10:43:50.631899   84300 start.go:83] releasing machines lock for "ha-107957-m03", held for 6.052631801s
	I0916 10:43:50.631968   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:43:50.660256   84300 out.go:177] * Found network options:
	I0916 10:43:50.661514   84300 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 10:43:50.663119   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:43:50.663144   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:43:50.663167   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:43:50.663176   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:43:50.663241   84300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:43:50.663280   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:50.663345   84300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:43:50.663401   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:43:50.682821   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:43:50.685693   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:43:51.201826   84300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:43:51.213238   84300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:43:51.226642   84300 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:43:51.226783   84300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:43:51.296658   84300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
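minikube neutralizes competing CNI configs by renaming them with a .mk_disabled suffix instead of deleting them, so they can be restored later; here no bridge or podman configs were present, so only the loopback config was moved aside. A sketch of the same rename-to-disable pass, with paths and glob patterns taken from the commands above:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Println("skipping:", err)
                    continue
                }
                fmt.Println("disabled", m)
            }
        }
    }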
	I0916 10:43:51.296688   84300 start.go:495] detecting cgroup driver to use...
	I0916 10:43:51.296723   84300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:43:51.296774   84300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:43:51.312320   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:43:51.325544   84300 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:43:51.325597   84300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:43:51.399131   84300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:43:51.414487   84300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:43:51.630074   84300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:43:52.003211   84300 docker.go:233] disabling docker service ...
	I0916 10:43:52.003284   84300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:43:52.017620   84300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:43:52.031368   84300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:43:52.394847   84300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:43:52.631366   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:43:52.697636   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:43:52.717866   84300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:43:52.717937   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:43:52.795153   84300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:43:52.795296   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:43:52.807439   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:43:52.819465   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:43:52.830335   84300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:43:52.901902   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:43:52.913572   84300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:43:52.924449   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:43:52.994098   84300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:43:53.004145   84300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:43:53.012603   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:53.317004   84300 ssh_runner.go:195] Run: sudo systemctl restart crio
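The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set the cgroup manager to cgroupfs, force conmon into the pod cgroup, and open unprivileged low ports. The net effect is roughly the following drop-in — an approximate reconstruction from those edits, not the literal file, which carries more keys:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]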
	I0916 10:43:53.652323   84300 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:43:53.652402   84300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:43:53.656355   84300 start.go:563] Will wait 60s for crictl version
	I0916 10:43:53.656407   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:43:53.660696   84300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:43:53.698333   84300 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:43:53.698426   84300 ssh_runner.go:195] Run: crio --version
	I0916 10:43:53.738391   84300 ssh_runner.go:195] Run: crio --version
	I0916 10:43:53.774437   84300 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:43:53.776029   84300 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:43:53.777505   84300 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:43:53.778910   84300 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:43:53.796101   84300 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:43:53.799788   84300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
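Same idempotent trick as the hostname pin earlier: this one-liner filters any stale host.minikube.internal mapping out of /etc/hosts, appends the gateway entry, and copies the temp file back with sudo, leaving a single line such as:

    192.168.49.1	host.minikube.internal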
	I0916 10:43:53.810865   84300 mustload.go:65] Loading cluster: ha-107957
	I0916 10:43:53.811096   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:43:53.811291   84300 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:43:53.829876   84300 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:43:53.830181   84300 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.4
	I0916 10:43:53.830209   84300 certs.go:194] generating shared ca certs ...
	I0916 10:43:53.830228   84300 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:43:53.830354   84300 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:43:53.830411   84300 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:43:53.830426   84300 certs.go:256] generating profile certs ...
	I0916 10:43:53.830509   84300 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:43:53.830587   84300 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.d4dae518
	I0916 10:43:53.830670   84300 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:43:53.830686   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:43:53.830705   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:43:53.830724   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:43:53.830745   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:43:53.830764   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:43:53.830784   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:43:53.830805   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:43:53.830823   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:43:53.830900   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:43:53.830944   84300 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:43:53.830959   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:43:53.830994   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:43:53.831034   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:43:53.831068   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:43:53.831137   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:43:53.831178   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:43:53.831198   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:43:53.831218   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:43:53.831285   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:43:53.849230   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:43:53.937736   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:43:53.941412   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:43:53.953617   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:43:53.956937   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:43:53.968798   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:43:53.972242   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:43:53.984910   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:43:53.988518   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:43:54.000364   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:43:54.003738   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:43:54.015135   84300 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:43:54.019008   84300 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:43:54.030603   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:43:54.053637   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:43:54.074803   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:43:54.098495   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:43:54.122514   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:43:54.145417   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:43:54.168006   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:43:54.190321   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:43:54.214398   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:43:54.238054   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:43:54.260604   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:43:54.283092   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:43:54.299613   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:43:54.316797   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:43:54.333494   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:43:54.351648   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:43:54.369223   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:43:54.386396   84300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:43:54.404421   84300 ssh_runner.go:195] Run: openssl version
	I0916 10:43:54.409819   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:43:54.418848   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:43:54.422203   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:43:54.422267   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:43:54.428918   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:43:54.438501   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:43:54.447373   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:43:54.450584   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:43:54.450627   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:43:54.456759   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:43:54.464858   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:43:54.473580   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:43:54.476797   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:43:54.476855   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:43:54.483142   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:43:54.491253   84300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:43:54.495070   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:43:54.501321   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:43:54.507460   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:43:54.513719   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:43:54.520459   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:43:54.526959   84300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:43:54.533169   84300 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.1 crio true true} ...
	I0916 10:43:54.533286   84300 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:43:54.533318   84300 kube-vip.go:115] generating kube-vip config ...
	I0916 10:43:54.533378   84300 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:43:54.545066   84300 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:43:54.545134   84300 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 10:43:54.545191   84300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:43:54.553770   84300 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:43:54.553838   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:43:54.561624   84300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:43:54.578033   84300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:43:54.595125   84300 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:43:54.612720   84300 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:43:54.616057   84300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:43:54.626676   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:54.724002   84300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:43:54.735446   84300 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:43:54.735791   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:43:54.737511   84300 out.go:177] * Verifying Kubernetes components...
	I0916 10:43:54.738766   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:54.813634   84300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:43:54.825157   84300 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:43:54.825410   84300 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:43:54.825483   84300 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:43:54.825684   84300 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m03" to be "Ready" ...
	I0916 10:43:54.825766   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:54.825774   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:54.825782   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:54.825789   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:54.828501   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:55.326341   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:55.326363   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:55.326372   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:55.326376   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:55.329051   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:55.826590   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:55.826613   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:55.826623   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:55.826628   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:55.829406   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:56.326203   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:56.326224   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:56.326232   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:56.326236   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:56.328781   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:56.826787   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:56.826807   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:56.826815   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:56.826818   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:56.829543   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:56.830056   84300 node_ready.go:53] node "ha-107957-m03" has status "Ready":"Unknown"
	I0916 10:43:57.326421   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:57.326447   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:57.326456   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:57.326461   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:57.328828   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:57.826577   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:57.826599   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:57.826609   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:57.826614   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:57.829312   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:58.325905   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:58.325929   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.325937   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.325942   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.328991   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:43:58.329997   84300 node_ready.go:49] node "ha-107957-m03" has status "Ready":"True"
	I0916 10:43:58.330024   84300 node_ready.go:38] duration metric: took 3.504320993s for node "ha-107957-m03" to be "Ready" ...
	I0916 10:43:58.330037   84300 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:43:58.330127   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:43:58.330139   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.330150   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.330155   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.336153   84300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:43:58.344276   84300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.344380   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:43:58.344392   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.344404   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.344411   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.346872   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:58.347498   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:58.347516   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.347527   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.347536   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.349670   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:58.350180   84300 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:58.350204   84300 pod_ready.go:82] duration metric: took 5.899732ms for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.350217   84300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.350295   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:43:58.350307   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.350315   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.350324   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.352498   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:58.353283   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:58.353308   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.353318   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.353323   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.355330   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:43:58.355861   84300 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:58.355879   84300 pod_ready.go:82] duration metric: took 5.653453ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.355891   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.355948   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:43:58.355959   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.355968   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.355973   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.358127   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:58.358623   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:43:58.358636   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.358643   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.358647   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.360829   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:58.361327   84300 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:58.361369   84300 pod_ready.go:82] duration metric: took 5.4702ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.361387   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.361456   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:43:58.361467   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.361477   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.361485   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.363452   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:43:58.363951   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:43:58.363966   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.363973   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.363978   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.365865   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:43:58.366232   84300 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:43:58.366248   84300 pod_ready.go:82] duration metric: took 4.854754ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.366257   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:43:58.526513   84300 request.go:632] Waited for 160.181988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:43:58.526573   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:43:58.526578   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.526585   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.526591   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.528886   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:58.726805   84300 request.go:632] Waited for 197.355997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:58.726946   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:58.726978   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.727004   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.727018   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.732656   84300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:43:58.926354   84300 request.go:632] Waited for 59.228337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:43:58.926426   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:43:58.926434   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:58.926441   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:58.926448   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:58.929151   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:59.126006   84300 request.go:632] Waited for 196.28923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:59.126068   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:59.126075   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:59.126082   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:59.126089   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:59.128791   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:59.366461   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:43:59.366485   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:59.366493   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:59.366497   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:59.369404   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:59.526372   84300 request.go:632] Waited for 156.355255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:59.526441   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:59.526447   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:59.526455   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:59.526463   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:59.528962   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:59.866590   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:43:59.866619   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:59.866631   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:59.866636   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:59.869355   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:43:59.926232   84300 request.go:632] Waited for 56.280646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:59.926285   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:43:59.926290   84300 round_trippers.go:469] Request Headers:
	I0916 10:43:59.926302   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:43:59.926306   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:43:59.928925   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:00.366533   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:00.366556   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:00.366564   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:00.366569   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:00.369284   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:00.369840   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:00.369856   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:00.369863   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:00.369867   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:00.372155   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:00.372589   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:00.866947   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:00.866973   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:00.866982   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:00.866990   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:00.869695   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:00.870283   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:00.870302   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:00.870310   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:00.870314   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:00.872327   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:01.367198   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:01.367221   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:01.367229   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:01.367235   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:01.370228   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:01.370899   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:01.370916   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:01.370924   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:01.370929   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:01.373265   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:01.866464   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:01.866485   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:01.866493   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:01.866498   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:01.869260   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:01.869865   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:01.869882   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:01.869889   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:01.869896   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:01.872051   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:02.366793   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:02.366815   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:02.366823   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:02.366827   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:02.369514   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:02.370220   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:02.370243   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:02.370254   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:02.370260   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:02.372701   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:02.373206   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:02.866545   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:02.866566   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:02.866574   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:02.866578   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:02.869439   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:02.870079   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:02.870098   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:02.870107   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:02.870115   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:02.872515   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:03.367330   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:03.367353   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:03.367361   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:03.367367   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:03.370213   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:03.370790   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:03.370804   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:03.370812   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:03.370816   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:03.373081   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:03.866580   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:03.866605   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:03.866616   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:03.866621   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:03.869289   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:03.869841   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:03.869856   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:03.869864   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:03.869868   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:03.872130   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:04.366957   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:04.366986   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:04.366999   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:04.367003   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:04.370092   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:04.370703   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:04.370720   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:04.370727   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:04.370731   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:04.373087   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:04.373580   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:04.866882   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:04.866906   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:04.866914   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:04.866917   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:04.869839   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:04.870455   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:04.870470   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:04.870479   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:04.870484   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:04.872719   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:05.366586   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:05.366613   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:05.366621   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:05.366627   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:05.369401   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:05.370010   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:05.370031   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:05.370038   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:05.370049   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:05.372402   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:05.867448   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:05.867473   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:05.867480   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:05.867486   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:05.870162   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:05.870711   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:05.870726   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:05.870733   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:05.870738   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:05.872993   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:06.366834   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:06.366860   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:06.366869   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:06.366873   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:06.369746   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:06.370411   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:06.370429   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:06.370436   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:06.370440   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:06.372685   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:06.866665   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:06.866689   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:06.866697   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:06.866702   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:06.869464   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:06.870140   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:06.870155   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:06.870162   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:06.870165   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:06.872292   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:06.872728   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:07.367161   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:07.367185   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:07.367193   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:07.367196   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:07.370151   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:07.370760   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:07.370777   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:07.370785   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:07.370790   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:07.373003   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:07.866818   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:07.866848   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:07.866857   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:07.866861   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:07.869535   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:07.870203   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:07.870217   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:07.870225   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:07.870228   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:07.872344   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:08.367210   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:08.367237   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:08.367248   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:08.367255   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:08.370199   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:08.370770   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:08.370787   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:08.370794   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:08.370798   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:08.372846   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:08.866844   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:08.866866   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:08.866873   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:08.866877   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:08.869788   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:08.870416   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:08.870435   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:08.870443   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:08.870446   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:08.872568   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:08.873014   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:09.367442   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:09.367466   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:09.367473   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:09.367479   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:09.370338   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:09.370884   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:09.370900   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:09.370908   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:09.370912   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:09.373256   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:09.867202   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:09.867222   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:09.867230   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:09.867234   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:09.870030   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:09.870669   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:09.870684   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:09.870692   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:09.870696   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:09.873085   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:10.366569   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:10.366593   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:10.366601   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:10.366604   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:10.369208   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:10.369946   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:10.369963   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:10.369974   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:10.369979   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:10.372161   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:10.866671   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:10.866692   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:10.866701   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:10.866704   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:10.869725   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:10.870265   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:10.870281   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:10.870289   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:10.870293   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:10.872786   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:10.873250   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:11.366542   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:11.366570   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:11.366581   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:11.366586   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:11.369530   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:11.370113   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:11.370130   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:11.370135   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:11.370139   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:11.372580   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:11.866956   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:11.867058   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:11.867085   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:11.867108   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:11.870676   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:11.871502   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:11.871521   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:11.871531   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:11.871536   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:11.874320   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:12.367178   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:12.367200   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:12.367208   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:12.367213   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:12.369984   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:12.370582   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:12.370599   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:12.370606   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:12.370609   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:12.372847   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:12.866758   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:12.866778   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:12.866786   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:12.866790   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:12.869579   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:12.870261   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:12.870275   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:12.870283   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:12.870288   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:12.872477   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:13.367359   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:13.367382   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:13.367394   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:13.367398   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:13.370071   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:13.370593   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:13.370609   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:13.370616   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:13.370620   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:13.372621   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:13.373026   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:13.867476   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:13.867502   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:13.867513   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:13.867520   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:13.870209   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:13.870772   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:13.870789   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:13.870796   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:13.870800   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:13.872952   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:14.366515   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:14.366539   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:14.366547   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:14.366552   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:14.369240   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:14.369880   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:14.369905   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:14.369913   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:14.369916   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:14.372099   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:14.866911   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:14.866930   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:14.866936   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:14.866939   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:14.869587   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:14.870235   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:14.870254   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:14.870261   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:14.870266   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:14.872356   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:15.367292   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:15.367316   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:15.367327   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:15.367333   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:15.369898   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:15.370578   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:15.370594   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:15.370601   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:15.370605   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:15.372692   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:15.373206   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:15.867218   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:15.867241   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:15.867250   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:15.867254   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:15.870123   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:15.870652   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:15.870668   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:15.870678   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:15.870684   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:15.872829   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:16.366692   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:16.366712   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:16.366722   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:16.366728   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:16.369153   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:16.369768   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:16.369783   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:16.369792   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:16.369798   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:16.372169   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:16.867117   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:16.867137   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:16.867144   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:16.867148   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:16.869594   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:16.870257   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:16.870274   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:16.870283   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:16.870288   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:16.872399   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:17.367214   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:17.367235   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:17.367243   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:17.367247   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:17.370120   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:17.370765   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:17.370786   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:17.370792   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:17.370796   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:17.372948   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:17.373413   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:17.866702   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:17.866722   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:17.866729   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:17.866734   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:17.869464   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:17.870121   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:17.870141   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:17.870149   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:17.870153   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:17.872143   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:18.366576   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:18.366601   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:18.366610   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:18.366616   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:18.369456   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:18.370101   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:18.370121   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:18.370133   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:18.370139   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:18.372203   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:18.867159   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:18.867179   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:18.867186   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:18.867190   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:18.869901   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:18.870613   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:18.870630   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:18.870635   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:18.870639   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:18.872847   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:19.366687   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:19.366707   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:19.366716   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:19.366720   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:19.369585   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:19.370168   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:19.370186   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:19.370196   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:19.370202   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:19.372287   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:19.867086   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:19.867106   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:19.867113   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:19.867117   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:19.869919   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:19.870525   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:19.870541   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:19.870551   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:19.870559   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:19.872628   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:19.873103   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:20.366512   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:20.366537   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:20.366548   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:20.366552   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:20.369244   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:20.369870   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:20.369886   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:20.369893   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:20.369898   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:20.372172   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:20.867140   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:20.867160   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:20.867168   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:20.867172   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:20.869855   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:20.870487   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:20.870503   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:20.870515   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:20.870521   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:20.872780   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:21.366530   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:21.366550   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:21.366558   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:21.366561   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:21.369555   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:21.370246   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:21.370262   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:21.370270   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:21.370274   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:21.372742   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:21.866974   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:21.866994   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:21.867001   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:21.867005   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:21.869743   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:21.870468   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:21.870487   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:21.870498   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:21.870504   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:21.872857   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:21.873310   84300 pod_ready.go:103] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:22.366644   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:22.366672   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:22.366680   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:22.366683   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:22.371195   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:44:22.371787   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:22.371803   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:22.371814   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:22.371819   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:22.374235   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:22.866572   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:22.866640   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:22.866652   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:22.866656   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:22.869249   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:22.869866   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:22.869881   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:22.869890   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:22.869894   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:22.871995   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.366837   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:23.366865   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.366877   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.366881   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.369853   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.370439   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:23.370457   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.370464   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.370470   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.372681   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.866431   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:23.866451   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.866458   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.866462   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.869262   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.869919   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:23.869935   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.869942   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.869945   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.872190   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.872666   84300 pod_ready.go:93] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:23.872689   84300 pod_ready.go:82] duration metric: took 25.50642474s for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.872706   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.872761   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:44:23.872771   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.872778   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.872781   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.874850   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.875401   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:23.875416   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.875423   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.875427   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.877753   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.878165   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:23.878183   84300 pod_ready.go:82] duration metric: took 5.468746ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.878192   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.878249   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:44:23.878257   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.878268   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.878274   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.882680   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:44:23.883291   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:23.883307   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.883314   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.883318   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.885456   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.885902   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:23.885921   84300 pod_ready.go:82] duration metric: took 7.723026ms for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.885931   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.885985   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:44:23.885994   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.886001   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.886005   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.888215   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.888831   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:23.888845   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.888853   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.888857   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.892270   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:23.892757   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:23.892776   84300 pod_ready.go:82] duration metric: took 6.838901ms for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.892784   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.892840   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:44:23.892845   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.892852   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.892857   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.894986   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:23.895565   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:23.895579   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:23.895586   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:23.895591   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:23.897293   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:23.897710   84300 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:23.897725   84300 pod_ready.go:82] duration metric: took 4.935258ms for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:23.897734   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:24.067171   84300 request.go:632] Waited for 169.34832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:44:24.067253   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:44:24.067269   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:24.067280   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:24.067289   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:24.070121   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:24.267077   84300 request.go:632] Waited for 196.351476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:24.267134   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:24.267139   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:24.267147   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:24.267169   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:24.269777   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:24.270221   84300 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:24.270238   84300 pod_ready.go:82] duration metric: took 372.498534ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
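	The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's per-client token-bucket rate limiter, not from server-side API Priority and Fairness. Below is a minimal Go sketch of where that limit is configured; the QPS/Burst numbers and the kubeconfig path are illustrative assumptions, not minikube's actual settings.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the local kubeconfig, as kubectl would.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// client-go enforces a token-bucket limit per client: QPS is the
		// sustained request rate, Burst the spike allowance. When the
		// bucket is empty the client sleeps and logs the "Waited for ...
		// due to client-side throttling" message seen above. The values
		// here are illustrative, not minikube's settings.
		config.QPS = 5
		config.Burst = 10

		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d pods in kube-system\n", len(pods.Items))
	}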
	I0916 10:44:24.270247   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:24.467339   84300 request.go:632] Waited for 197.019568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:44:24.467417   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:44:24.467424   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:24.467435   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:24.467446   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:24.470287   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:24.667264   84300 request.go:632] Waited for 196.342566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:24.667323   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:24.667330   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:24.667339   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:24.667350   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:24.669931   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:24.670458   84300 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:24.670479   84300 pod_ready.go:82] duration metric: took 400.224876ms for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:24.670493   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:24.867501   84300 request.go:632] Waited for 196.918902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:44:24.867565   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:44:24.867573   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:24.867584   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:24.867594   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:24.870406   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:25.067366   84300 request.go:632] Waited for 196.357878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:25.067422   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:25.067430   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:25.067446   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.067456   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.070046   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:25.070508   84300 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:25.070526   84300 pod_ready.go:82] duration metric: took 400.023474ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:25.070535   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:25.266473   84300 request.go:632] Waited for 195.855318ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:44:25.266523   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:44:25.266528   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:25.266537   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.266542   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.269129   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:25.467131   84300 request.go:632] Waited for 197.395665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:25.467208   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:25.467214   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:25.467227   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.467236   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.470079   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:25.470530   84300 pod_ready.go:93] pod "kube-proxy-f2scr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:25.470546   84300 pod_ready.go:82] duration metric: took 400.005953ms for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:25.470556   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:25.666735   84300 request.go:632] Waited for 196.093045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:25.666806   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:25.666814   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:25.666824   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.666830   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.669827   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:25.866831   84300 request.go:632] Waited for 196.34919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:25.866904   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:25.866912   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:25.866920   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.866925   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.869614   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:25.870108   84300 pod_ready.go:98] node "ha-107957-m04" hosting pod "kube-proxy-hm8zn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m04" has status "Ready":"Unknown"
	I0916 10:44:25.870129   84300 pod_ready.go:82] duration metric: took 399.567853ms for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	E0916 10:44:25.870139   84300 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m04" hosting pod "kube-proxy-hm8zn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-107957-m04" has status "Ready":"Unknown"
	I0916 10:44:25.870148   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:26.067162   84300 request.go:632] Waited for 196.940283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:44:26.067296   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:44:26.067308   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:26.067315   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:26.067319   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:26.069979   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:26.266965   84300 request.go:632] Waited for 196.34188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:26.267035   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:26.267044   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:26.267052   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:26.267057   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:26.269567   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:26.270185   84300 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:26.270201   84300 pod_ready.go:82] duration metric: took 400.04741ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:26.270210   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:26.467227   84300 request.go:632] Waited for 196.956031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:44:26.467316   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:44:26.467328   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:26.467339   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:26.467346   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:26.469894   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:26.666853   84300 request.go:632] Waited for 196.34981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:26.666935   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:26.666947   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:26.666955   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:26.666959   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:26.669813   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:26.670257   84300 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:26.670275   84300 pod_ready.go:82] duration metric: took 400.058875ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:26.670285   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:26.867486   84300 request.go:632] Waited for 197.121944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:44:26.867560   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:44:26.867570   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:26.867582   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:26.867590   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:26.870410   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:27.067293   84300 request.go:632] Waited for 196.362895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:27.067366   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:27.067377   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:27.067385   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:27.067390   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:27.070187   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:27.070672   84300 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:27.070692   84300 pod_ready.go:82] duration metric: took 400.399973ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:27.070701   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:27.266700   84300 request.go:632] Waited for 195.93131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:44:27.266760   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:44:27.266766   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:27.266773   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:27.266776   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:27.269302   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:27.467004   84300 request.go:632] Waited for 197.157855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:27.467072   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:27.467079   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:27.467087   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:27.467093   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:27.469464   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:27.469939   84300 pod_ready.go:93] pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:27.469957   84300 pod_ready.go:82] duration metric: took 399.249765ms for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:27.469967   84300 pod_ready.go:39] duration metric: took 29.139918379s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:44:27.469986   84300 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:44:27.470040   84300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:44:27.480563   84300 api_server.go:72] duration metric: took 32.745075816s to wait for apiserver process to appear ...
	I0916 10:44:27.480586   84300 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:44:27.480608   84300 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:44:27.484194   84300 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:44:27.484252   84300 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:44:27.484261   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:27.484269   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:27.484273   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:27.485005   84300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:44:27.485058   84300 api_server.go:141] control plane version: v1.31.1
	I0916 10:44:27.485070   84300 api_server.go:131] duration metric: took 4.47853ms to wait for apiserver health ...
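	The healthz wait above is a plain HTTPS GET that treats a 200 response with the literal body "ok" (as logged) as healthy. A rough Go equivalent follows; skipping TLS verification is an assumption made for brevity, not what a production client should do.

	package healthz

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs the GET /healthz probe logged at
	// api_server.go:253 and succeeds on a 200 with body "ok".
	func checkHealthz(endpoint string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for brevity; a real client should trust the
				// cluster CA instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
		}
		return nil
	}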
	I0916 10:44:27.485077   84300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:44:27.667486   84300 request.go:632] Waited for 182.327777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:27.667542   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:27.667549   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:27.667559   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:27.667567   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:27.672664   84300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:44:27.680580   84300 system_pods.go:59] 26 kube-system pods found
	I0916 10:44:27.680621   84300 system_pods.go:61] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:44:27.680628   84300 system_pods.go:61] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:44:27.680640   84300 system_pods.go:61] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:44:27.680645   84300 system_pods.go:61] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:44:27.680651   84300 system_pods.go:61] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:44:27.680656   84300 system_pods.go:61] "kindnet-4lkzl" [d08902f4-b63c-46cc-b388-c4fcbe8fc960] Running
	I0916 10:44:27.680661   84300 system_pods.go:61] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:44:27.680666   84300 system_pods.go:61] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:44:27.680685   84300 system_pods.go:61] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:44:27.680692   84300 system_pods.go:61] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:44:27.680698   84300 system_pods.go:61] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:44:27.680703   84300 system_pods.go:61] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:44:27.680709   84300 system_pods.go:61] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:44:27.680717   84300 system_pods.go:61] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:44:27.680725   84300 system_pods.go:61] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:44:27.680734   84300 system_pods.go:61] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:44:27.680738   84300 system_pods.go:61] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:44:27.680742   84300 system_pods.go:61] "kube-proxy-hm8zn" [6ea6916e-f34c-42b3-996b-033915687fd1] Running
	I0916 10:44:27.680745   84300 system_pods.go:61] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:44:27.680751   84300 system_pods.go:61] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:44:27.680757   84300 system_pods.go:61] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:44:27.680762   84300 system_pods.go:61] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:44:27.680771   84300 system_pods.go:61] "kube-vip-ha-107957" [d508299d-30c6-4f09-8f93-04280ddc9c11] Running
	I0916 10:44:27.680776   84300 system_pods.go:61] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:44:27.680788   84300 system_pods.go:61] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:44:27.680794   84300 system_pods.go:61] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:44:27.680804   84300 system_pods.go:74] duration metric: took 195.718746ms to wait for pod list to return data ...
	I0916 10:44:27.680817   84300 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:44:27.866491   84300 request.go:632] Waited for 185.579397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:44:27.866558   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:44:27.866564   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:27.866571   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:27.866578   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:27.869493   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:27.869635   84300 default_sa.go:45] found service account: "default"
	I0916 10:44:27.869649   84300 default_sa.go:55] duration metric: took 188.824485ms for default service account to be created ...
	I0916 10:44:27.869658   84300 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:44:28.067116   84300 request.go:632] Waited for 197.389692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:28.067172   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:28.067178   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:28.067187   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:28.067193   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:28.072012   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:44:28.079837   84300 system_pods.go:86] 26 kube-system pods found
	I0916 10:44:28.079867   84300 system_pods.go:89] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running
	I0916 10:44:28.079873   84300 system_pods.go:89] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running
	I0916 10:44:28.079877   84300 system_pods.go:89] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:44:28.079882   84300 system_pods.go:89] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:44:28.079887   84300 system_pods.go:89] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:44:28.079891   84300 system_pods.go:89] "kindnet-4lkzl" [d08902f4-b63c-46cc-b388-c4fcbe8fc960] Running
	I0916 10:44:28.079894   84300 system_pods.go:89] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:44:28.079899   84300 system_pods.go:89] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:44:28.079903   84300 system_pods.go:89] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:44:28.079907   84300 system_pods.go:89] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:44:28.079935   84300 system_pods.go:89] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:44:28.079943   84300 system_pods.go:89] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:44:28.079948   84300 system_pods.go:89] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:44:28.079955   84300 system_pods.go:89] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:44:28.079959   84300 system_pods.go:89] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:44:28.079966   84300 system_pods.go:89] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:44:28.079970   84300 system_pods.go:89] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:44:28.079973   84300 system_pods.go:89] "kube-proxy-hm8zn" [6ea6916e-f34c-42b3-996b-033915687fd1] Running
	I0916 10:44:28.079977   84300 system_pods.go:89] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:44:28.079984   84300 system_pods.go:89] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:44:28.079987   84300 system_pods.go:89] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:44:28.079991   84300 system_pods.go:89] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:44:28.079995   84300 system_pods.go:89] "kube-vip-ha-107957" [d508299d-30c6-4f09-8f93-04280ddc9c11] Running
	I0916 10:44:28.079999   84300 system_pods.go:89] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:44:28.080002   84300 system_pods.go:89] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:44:28.080005   84300 system_pods.go:89] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:44:28.080011   84300 system_pods.go:126] duration metric: took 210.349343ms to wait for k8s-apps to be running ...
	I0916 10:44:28.080021   84300 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:44:28.080065   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:44:28.091226   84300 system_svc.go:56] duration metric: took 11.189026ms WaitForService to wait for kubelet
	I0916 10:44:28.091262   84300 kubeadm.go:582] duration metric: took 33.355775238s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:44:28.091289   84300 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:44:28.266716   84300 request.go:632] Waited for 175.326934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:44:28.266785   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:44:28.266791   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:28.266798   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:28.266804   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:28.269623   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:28.271062   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:28.271088   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:28.271102   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:28.271106   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:28.271110   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:28.271114   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:28.271118   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:28.271123   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:28.271133   84300 node_conditions.go:105] duration metric: took 179.838495ms to run NodePressure ...
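The four capacity pairs above are one per node of the ha-107957 cluster (three control planes plus the m04 worker restarted below). A rough kubectl equivalent of this capacity read, assuming the profile's kubeconfig context is named ha-107957:

    kubectl --context ha-107957 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'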
	I0916 10:44:28.271151   84300 start.go:241] waiting for startup goroutines ...
	I0916 10:44:28.271180   84300 start.go:255] writing updated cluster config ...
	I0916 10:44:28.273608   84300 out.go:201] 
	I0916 10:44:28.275288   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:44:28.275410   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:44:28.277251   84300 out.go:177] * Starting "ha-107957-m04" worker node in "ha-107957" cluster
	I0916 10:44:28.278932   84300 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:44:28.280166   84300 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:44:28.281458   84300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:44:28.281482   84300 cache.go:56] Caching tarball of preloaded images
	I0916 10:44:28.281486   84300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:44:28.281585   84300 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:44:28.281602   84300 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:44:28.281715   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:44:28.301295   84300 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:44:28.301314   84300 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:44:28.301426   84300 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:44:28.301447   84300 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:44:28.301456   84300 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:44:28.301465   84300 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:44:28.301473   84300 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:44:28.302693   84300 image.go:273] response: 
	I0916 10:44:28.362139   84300 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:44:28.362184   84300 cache.go:194] Successfully downloaded all kic artifacts
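The block above is minikube's three-tier lookup for the kic base image: the local docker daemon first (rejected here because the daemon's copy is the wrong architecture), then the on-disk tarball cache, then a registry pull. The flow, sketched in shell with a hypothetical tarball path (the real cache layout differs, and minikube also verifies the architecture of a daemon hit):

    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644'
    TARBALL="$HOME/.minikube/cache/kic/kicbase.tar"   # hypothetical path, for illustration only
    docker image inspect "$IMG" >/dev/null 2>&1 \
      || { [ -f "$TARBALL" ] && docker load -i "$TARBALL"; } \
      || docker pull "$IMG"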
	I0916 10:44:28.362221   84300 start.go:360] acquireMachinesLock for ha-107957-m04: {Name:mk140f36fe9b3ae2aca73cd487e78881b966d113 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:44:28.362302   84300 start.go:364] duration metric: took 51.723µs to acquireMachinesLock for "ha-107957-m04"
	I0916 10:44:28.362328   84300 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:44:28.362336   84300 fix.go:54] fixHost starting: m04
	I0916 10:44:28.362641   84300 cli_runner.go:164] Run: docker container inspect ha-107957-m04 --format={{.State.Status}}
	I0916 10:44:28.379418   84300 fix.go:112] recreateIfNeeded on ha-107957-m04: state=Stopped err=<nil>
	W0916 10:44:28.379452   84300 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:44:28.382356   84300 out.go:177] * Restarting existing docker container for "ha-107957-m04" ...
	I0916 10:44:28.384095   84300 cli_runner.go:164] Run: docker start ha-107957-m04
	I0916 10:44:28.655990   84300 cli_runner.go:164] Run: docker container inspect ha-107957-m04 --format={{.State.Status}}
	I0916 10:44:28.675013   84300 kic.go:430] container "ha-107957-m04" state is running.
	I0916 10:44:28.675409   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m04
	I0916 10:44:28.694579   84300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:44:28.694822   84300 machine.go:93] provisionDockerMachine start ...
	I0916 10:44:28.694882   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:28.713092   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:28.713323   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0916 10:44:28.713369   84300 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:44:28.714019   84300 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43914->127.0.0.1:32823: read: connection reset by peer
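This dial error is expected: docker start returns before sshd inside the container is accepting connections, so libmachine retries until the handshake succeeds (about three seconds later, below). The same wait, sketched as a shell loop against the forwarded port, using the key path and docker user shown further down:

    until ssh -p 32823 -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
        -i ~/.minikube/machines/ha-107957-m04/id_rsa docker@127.0.0.1 true 2>/dev/null; do
      sleep 1
    done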
	I0916 10:44:31.849076   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m04
	
	I0916 10:44:31.849103   84300 ubuntu.go:169] provisioning hostname "ha-107957-m04"
	I0916 10:44:31.849152   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:31.867444   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:31.867627   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0916 10:44:31.867639   84300 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m04 && echo "ha-107957-m04" | sudo tee /etc/hostname
	I0916 10:44:32.008028   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m04
	
	I0916 10:44:32.008120   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:32.026391   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:32.026588   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0916 10:44:32.026612   84300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:44:32.157764   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
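Empty output is the success case here: both the already-present branch and the sed branch of the script above are silent. The 127.0.1.1 mapping follows the Debian/Ubuntu convention of resolving the machine's own hostname without DNS; a quick check that it is in place:

    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expect: 127.0.1.1 ha-107957-m04
    getent hosts ha-107957-m04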
	I0916 10:44:32.157800   84300 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:44:32.157822   84300 ubuntu.go:177] setting up certificates
	I0916 10:44:32.157836   84300 provision.go:84] configureAuth start
	I0916 10:44:32.157897   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m04
	I0916 10:44:32.175614   84300 provision.go:143] copyHostCerts
	I0916 10:44:32.175667   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:44:32.175706   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:44:32.175716   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:44:32.175784   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:44:32.175873   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:44:32.175894   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:44:32.175898   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:44:32.175922   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:44:32.175967   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:44:32.175996   84300 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:44:32.176002   84300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:44:32.176028   84300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:44:32.176100   84300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m04 san=[127.0.0.1 192.168.49.5 ha-107957-m04 localhost minikube]
	I0916 10:44:32.347221   84300 provision.go:177] copyRemoteCerts
	I0916 10:44:32.347282   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:44:32.347320   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:32.367005   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:44:32.462006   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:44:32.462082   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:44:32.485250   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:44:32.485309   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:44:32.507254   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:44:32.507325   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:44:32.530913   84300 provision.go:87] duration metric: took 373.061708ms to configureAuth
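configureAuth regenerated the machine's server certificate with the SANs listed above, covering every name the docker endpoint may be dialed by (the loopback, the cluster IP 192.168.49.5, the hostname). One way to confirm the deployed cert matches, over the same forwarded SSH port:

    ssh -p 32823 -i ~/.minikube/machines/ha-107957-m04/id_rsa docker@127.0.0.1 \
        "sudo openssl x509 -in /etc/docker/server.pem -noout -text" \
      | grep -A1 'Subject Alternative Name'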
	I0916 10:44:32.530945   84300 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:44:32.531183   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:44:32.531292   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:32.548750   84300 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:32.548930   84300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0916 10:44:32.548945   84300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:44:32.775202   84300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:44:32.775233   84300 machine.go:96] duration metric: took 4.080387823s to provisionDockerMachine
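The --insecure-registry 10.96.0.0/12 option matches the cluster's ServiceCIDR (visible in the config dump further down), so CRI-O may pull from an in-cluster registry Service over plain HTTP; the crio restart is what makes the drop-in take effect. An after-the-fact check, assuming the base image wires this sysconfig file into the crio unit:

    cat /etc/sysconfig/crio.minikube          # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environment  # confirm the unit actually sources that file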
	I0916 10:44:32.775247   84300 start.go:293] postStartSetup for "ha-107957-m04" (driver="docker")
	I0916 10:44:32.775260   84300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:44:32.775307   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:44:32.775342   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:32.793156   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:44:32.890878   84300 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:44:32.894127   84300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:44:32.894162   84300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:44:32.894170   84300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:44:32.894175   84300 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:44:32.894184   84300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:44:32.894236   84300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:44:32.894305   84300 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:44:32.894316   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:44:32.894393   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:44:32.902696   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:44:32.925028   84300 start.go:296] duration metric: took 149.764688ms for postStartSetup
	I0916 10:44:32.925109   84300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:44:32.925152   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:32.942492   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:44:33.034251   84300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:44:33.038744   84300 fix.go:56] duration metric: took 4.67640256s for fixHost
	I0916 10:44:33.038774   84300 start.go:83] releasing machines lock for "ha-107957-m04", held for 4.676456365s
	I0916 10:44:33.038846   84300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m04
	I0916 10:44:33.058630   84300 out.go:177] * Found network options:
	I0916 10:44:33.060298   84300 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W0916 10:44:33.061750   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:44:33.061774   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:44:33.061783   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:44:33.061804   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:44:33.061811   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:44:33.061824   84300 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:44:33.061895   84300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:44:33.061929   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:33.061970   84300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:44:33.062033   84300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:44:33.082299   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:44:33.082658   84300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:44:33.315135   84300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:44:33.319868   84300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:44:33.328138   84300 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:44:33.328210   84300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:44:33.336613   84300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
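Here minikube parks any default loopback/bridge/podman CNI definitions by renaming them with a .mk_disabled suffix, so that the profile's chosen CNI (kindnet in this cluster, per the kube-system pod list above) is the only active configuration in /etc/cni/net.d. Listing the directory shows what, if anything, was set aside:

    ls -l /etc/cni/net.d/   # *.mk_disabled entries are configs minikube disabled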
	I0916 10:44:33.336644   84300 start.go:495] detecting cgroup driver to use...
	I0916 10:44:33.336694   84300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:44:33.336737   84300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:44:33.348402   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:44:33.359427   84300 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:44:33.359479   84300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:44:33.371403   84300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:44:33.382675   84300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:44:33.468171   84300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:44:33.551594   84300 docker.go:233] disabling docker service ...
	I0916 10:44:33.551681   84300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:44:33.564009   84300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:44:33.574957   84300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:44:33.651963   84300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:44:33.730618   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:44:33.741252   84300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:44:33.757216   84300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:44:33.757275   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:44:33.766595   84300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:44:33.766648   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:44:33.776009   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:44:33.785105   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:44:33.794366   84300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:44:33.802994   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:44:33.812666   84300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:44:33.821723   84300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:44:33.831291   84300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:44:33.839260   84300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
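Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (reconstructed from the commands, not captured from the host; the section headers are CRI-O's usual ones and may differ):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]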
	I0916 10:44:33.846994   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:33.923717   84300 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:44:34.034040   84300 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:44:34.034105   84300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:44:34.037661   84300 start.go:563] Will wait 60s for crictl version
	I0916 10:44:34.037714   84300 ssh_runner.go:195] Run: which crictl
	I0916 10:44:34.040811   84300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:44:34.073479   84300 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:44:34.073560   84300 ssh_runner.go:195] Run: crio --version
	I0916 10:44:34.108609   84300 ssh_runner.go:195] Run: crio --version
	I0916 10:44:34.145463   84300 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:44:34.147539   84300 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:44:34.149076   84300 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:44:34.150604   84300 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I0916 10:44:34.151951   84300 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:34.168537   84300 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:44:34.172098   84300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:44:34.182340   84300 mustload.go:65] Loading cluster: ha-107957
	I0916 10:44:34.182586   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:44:34.182828   84300 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:44:34.201256   84300 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:44:34.201526   84300 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.5
	I0916 10:44:34.201539   84300 certs.go:194] generating shared ca certs ...
	I0916 10:44:34.201557   84300 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:34.201697   84300 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:44:34.201760   84300 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:44:34.201776   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:44:34.201797   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:44:34.201815   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:44:34.201834   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:44:34.201892   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:44:34.201932   84300 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:44:34.201946   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:44:34.201981   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:44:34.202014   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:44:34.202051   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:44:34.202106   84300 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:44:34.202143   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:44:34.202163   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.202182   84300 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:44:34.202209   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:44:34.225589   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:44:34.248640   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:44:34.271131   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:44:34.292651   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:44:34.314764   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:44:34.337364   84300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:44:34.360286   84300 ssh_runner.go:195] Run: openssl version
	I0916 10:44:34.365499   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:44:34.376227   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.380061   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.380126   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.386682   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:44:34.395131   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:44:34.404215   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:44:34.407610   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:44:34.407673   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:44:34.414198   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:44:34.422795   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:44:34.431977   84300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:44:34.435406   84300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:44:34.435522   84300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:44:34.441821   84300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:44:34.450491   84300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:44:34.453617   84300 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:44:34.453661   84300 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0916 10:44:34.453762   84300 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
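The empty ExecStart= line in the unit above is deliberate systemd syntax: for a non-oneshot service exactly one ExecStart is allowed, so a drop-in must first clear the value inherited from the base kubelet.service before redefining it. Once the daemon-reload below has run, the merged unit can be inspected with:

    systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in, ExecStart cleared then redefined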
	I0916 10:44:34.453815   84300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:44:34.461907   84300 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:44:34.461964   84300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:44:34.469952   84300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:44:34.486567   84300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:44:34.503750   84300 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:44:34.507017   84300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
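The rebuild-then-cp pattern here (and for host.minikube.internal above) is deliberate: inside a Docker container /etc/hosts is a bind mount, so it cannot be replaced by rename; copying over the file writes through the mounted inode instead. The idiom, spelled out:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.49.254\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts   # cp, not mv: keep the bind-mounted inode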
	I0916 10:44:34.517791   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:34.595788   84300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:44:34.607128   84300 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0916 10:44:34.607385   84300 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:44:34.609886   84300 out.go:177] * Verifying Kubernetes components...
	I0916 10:44:34.611414   84300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:34.692174   84300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:44:34.704530   84300 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:44:34.704859   84300 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:44:34.704959   84300 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:44:34.705398   84300 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m04" to be "Ready" ...
	I0916 10:44:34.705509   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:34.705521   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:34.705531   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:34.705540   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:34.707807   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:35.206599   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:35.206620   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:35.206629   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:35.206637   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:35.209570   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:35.706391   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:35.706414   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:35.706422   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:35.706426   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:35.709028   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:36.206566   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:36.206587   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:36.206597   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:36.206602   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:36.209255   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:36.706244   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:36.706271   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:36.706282   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:36.706287   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:36.708941   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:36.709444   84300 node_ready.go:53] node "ha-107957-m04" has status "Ready":"Unknown"
	I0916 10:44:37.206632   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:37.206652   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:37.206661   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:37.206665   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:37.209490   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:37.706403   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:37.706429   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:37.706437   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:37.706441   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:37.709212   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:38.206618   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:38.206638   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:38.206650   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:38.206655   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:38.209557   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:38.706634   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:38.706656   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:38.706664   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:38.706669   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:38.709428   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:38.709966   84300 node_ready.go:53] node "ha-107957-m04" has status "Ready":"Unknown"
	I0916 10:44:39.206607   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:39.206626   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:39.206639   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:39.206642   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:39.209220   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:39.706071   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:39.706090   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:39.706098   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:39.706102   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:39.709087   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:40.206583   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:40.206603   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:40.206610   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.206614   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.209107   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:40.706580   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:40.706599   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:40.706607   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.706611   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.709167   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.206605   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:41.206626   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:41.206636   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.206641   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.209321   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.209970   84300 node_ready.go:53] node "ha-107957-m04" has status "Ready":"Unknown"
	I0916 10:44:41.706325   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:41.706347   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:41.706360   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.706366   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.709457   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:42.206603   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:42.206624   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.206640   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.206646   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.209272   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.209751   84300 node_ready.go:49] node "ha-107957-m04" has status "Ready":"True"
	I0916 10:44:42.209773   84300 node_ready.go:38] duration metric: took 7.504351288s for node "ha-107957-m04" to be "Ready" ...
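The ~500ms polling loop above is minikube re-reading the Node object until its Ready condition flips from Unknown (the state the stop left behind) to True, which takes about 7.5s here. The same gate as a one-liner, assuming the ha-107957 kubeconfig context:

    kubectl --context ha-107957 wait --for=condition=Ready node/ha-107957-m04 --timeout=6m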
	I0916 10:44:42.209783   84300 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:44:42.209852   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:42.209864   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.209873   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.209882   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.216024   84300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:44:42.223226   84300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.223339   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:44:42.223350   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.223361   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.223368   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.226070   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.226798   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:42.226817   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.226827   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.226833   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.228980   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.229476   84300 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:42.229493   84300 pod_ready.go:82] duration metric: took 6.238353ms for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
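Each system-critical pod check pairs a pod GET with a GET of its node, so the request rate soon exceeds client-go's default rate limiter (about 5 QPS with a small burst; QPS:0/Burst:0 in the client config above means the defaults apply), which is where the "Waited ... due to client-side throttling" lines below come from. The whole gate is roughly equivalent to repeating, once per component label:

    kubectl --context ha-107957 -n kube-system wait --for=condition=Ready pod \
        -l k8s-app=kube-dns --timeout=6m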
	I0916 10:44:42.229502   84300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.229561   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:44:42.229569   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.229576   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.229580   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.231762   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.232314   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:42.232330   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.232340   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.232347   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.234529   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.235070   84300 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:42.235090   84300 pod_ready.go:82] duration metric: took 5.581744ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.235108   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.235188   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:44:42.235198   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.235207   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.235216   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.237494   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.238030   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:42.238045   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.238052   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.238057   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.240367   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.240777   84300 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:42.240794   84300 pod_ready.go:82] duration metric: took 5.679644ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.240804   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.240874   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:44:42.240883   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.240890   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.240894   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.243014   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.243521   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:42.243538   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.243545   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.243550   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.245610   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.246050   84300 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:42.246064   84300 pod_ready.go:82] duration metric: took 5.253974ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.246075   84300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.407487   84300 request.go:632] Waited for 161.326218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:42.407558   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:44:42.407567   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.407578   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.407590   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.410085   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.607058   84300 request.go:632] Waited for 196.375725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:42.607109   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:42.607115   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.607129   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.607136   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.609922   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.610504   84300 pod_ready.go:93] pod "etcd-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:42.610526   84300 pod_ready.go:82] duration metric: took 364.44318ms for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.610554   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:42.807450   84300 request.go:632] Waited for 196.811741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:44:42.807528   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:44:42.807539   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:42.807550   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.807558   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.810487   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.007529   84300 request.go:632] Waited for 196.389037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:43.007584   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:43.007591   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:43.007601   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.007605   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.010406   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.010847   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:43.010865   84300 pod_ready.go:82] duration metric: took 400.301628ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:43.010875   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:43.206951   84300 request.go:632] Waited for 196.010862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:44:43.207050   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:44:43.207059   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:43.207067   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.207073   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.210007   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.406933   84300 request.go:632] Waited for 196.246305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:43.407020   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:43.407027   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:43.407037   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.407044   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.409708   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.410160   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:43.410181   84300 pod_ready.go:82] duration metric: took 399.297462ms for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:43.410194   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:43.607206   84300 request.go:632] Waited for 196.919731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:44:43.607273   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:44:43.607280   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:43.607290   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.607296   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.610123   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.807046   84300 request.go:632] Waited for 196.24883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:43.807098   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:43.807105   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:43.807114   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.807125   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.809768   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.810224   84300 pod_ready.go:93] pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:43.810243   84300 pod_ready.go:82] duration metric: took 400.041628ms for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:43.810256   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:44.007404   84300 request.go:632] Waited for 197.061083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:44:44.007506   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:44:44.007518   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:44.007526   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.007534   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.010402   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.207465   84300 request.go:632] Waited for 196.374195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:44.207522   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:44.207527   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:44.207534   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.207545   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.210482   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.211038   84300 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:44.211067   84300 pod_ready.go:82] duration metric: took 400.797666ms for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:44.211078   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:44.407026   84300 request.go:632] Waited for 195.871645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:44:44.407122   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:44:44.407132   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:44.407140   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.407146   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.409691   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.606978   84300 request.go:632] Waited for 196.361446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:44.607027   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:44.607033   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:44.607043   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.607049   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.609644   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.610112   84300 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:44.610134   84300 pod_ready.go:82] duration metric: took 399.045359ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:44.610144   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:44.807140   84300 request.go:632] Waited for 196.914951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:44:44.807205   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:44:44.807212   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:44.807225   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.807235   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.810453   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:45.007471   84300 request.go:632] Waited for 196.410035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:45.007531   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:45.007536   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:45.007544   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.007550   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.010353   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.010794   84300 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:45.010810   84300 pod_ready.go:82] duration metric: took 400.661012ms for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:45.010819   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:45.206847   84300 request.go:632] Waited for 195.947015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:44:45.206912   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:44:45.206917   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:45.206924   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.206929   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.209734   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.407640   84300 request.go:632] Waited for 197.352806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:45.407712   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:45.407721   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:45.407728   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.407734   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.410321   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.410811   84300 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:45.410826   84300 pod_ready.go:82] duration metric: took 400.001703ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:45.410836   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:45.606938   84300 request.go:632] Waited for 196.035223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:44:45.607019   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:44:45.607030   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:45.607040   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.607046   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.609766   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.807498   84300 request.go:632] Waited for 197.139721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:45.807561   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:45.807568   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:45.807579   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.807590   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.810758   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:45.811228   84300 pod_ready.go:93] pod "kube-proxy-f2scr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:45.811247   84300 pod_ready.go:82] duration metric: took 400.405177ms for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:45.811259   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:46.007249   84300 request.go:632] Waited for 195.918445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:46.007312   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:46.007337   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:46.007351   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.007363   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.010042   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.207070   84300 request.go:632] Waited for 196.361529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:46.207163   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:46.207175   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:46.207184   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.207191   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.209877   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.407615   84300 request.go:632] Waited for 95.274606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:46.407667   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:46.407672   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:46.407679   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.407682   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.410577   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.607591   84300 request.go:632] Waited for 196.351537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:46.607654   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:46.607662   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:46.607671   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.607681   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.610344   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.812030   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:46.812060   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:46.812071   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.812076   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.815153   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:47.007141   84300 request.go:632] Waited for 191.397936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:47.007201   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:47.007207   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:47.007214   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.007218   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.009938   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.312380   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:47.312399   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:47.312407   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.312412   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.314955   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.407061   84300 request.go:632] Waited for 91.264846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:47.407128   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:47.407134   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:47.407141   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.407150   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.409835   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.811539   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:47.811559   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:47.811566   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.811570   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.813992   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.814757   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:47.814778   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:47.814806   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.814815   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.817731   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.818276   84300 pod_ready.go:103] pod "kube-proxy-hm8zn" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:48.311650   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:48.311682   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:48.311695   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.311702   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.314446   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.315168   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:48.315186   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:48.315196   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.315201   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.317506   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.812383   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:48.812408   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:48.812417   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.812423   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.814771   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.815546   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:48.815564   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:48.815576   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.815580   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.818127   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.312198   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:49.312221   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:49.312230   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.312236   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.315021   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.315621   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:49.315637   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:49.315644   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.315649   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.317655   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:49.812115   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:49.812136   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:49.812148   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.812152   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.815042   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.816291   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:49.816360   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:49.816382   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.816399   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.819838   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:49.820221   84300 pod_ready.go:103] pod "kube-proxy-hm8zn" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:50.312144   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:50.312163   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:50.312171   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.312176   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.314908   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.315573   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:50.315598   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:50.315609   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.315617   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.318017   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.811507   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:50.811529   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:50.811537   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.811541   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.814199   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.814799   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:50.814816   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:50.814823   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.814829   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.817078   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.311605   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:44:51.311630   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:51.311641   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.311646   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.314482   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.315195   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:44:51.315219   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:51.315228   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.315233   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.320155   84300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:44:51.320659   84300 pod_ready.go:93] pod "kube-proxy-hm8zn" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:51.320691   84300 pod_ready.go:82] duration metric: took 5.509423953s for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:51.320704   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:51.320790   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:44:51.320800   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:51.320810   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.320820   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.323066   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.323658   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:51.323675   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:51.323685   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.323692   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.325736   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.326138   84300 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:51.326152   84300 pod_ready.go:82] duration metric: took 5.433354ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:51.326161   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:51.326216   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:44:51.326224   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:51.326230   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.326235   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.328163   84300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:51.406815   84300 request.go:632] Waited for 78.230067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:51.406875   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:44:51.406880   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:51.406887   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.406891   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.409719   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.410142   84300 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:51.410160   84300 pod_ready.go:82] duration metric: took 83.991784ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:51.410170   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:51.607646   84300 request.go:632] Waited for 197.357063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:44:51.607712   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:44:51.607719   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:51.607728   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.607733   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.610475   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.807268   84300 request.go:632] Waited for 196.066413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:51.807388   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:44:51.807408   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:51.807445   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.807462   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.810285   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.810782   84300 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:51.810805   84300 pod_ready.go:82] duration metric: took 400.609416ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:51.810818   84300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:52.006934   84300 request.go:632] Waited for 196.044383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:44:52.007018   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:44:52.007030   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:52.007042   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.007054   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.009794   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.207652   84300 request.go:632] Waited for 197.353738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:52.207734   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:44:52.207742   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:52.207753   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.207761   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.210570   84300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.211049   84300 pod_ready.go:93] pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:52.211070   84300 pod_ready.go:82] duration metric: took 400.244522ms for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:52.211082   84300 pod_ready.go:39] duration metric: took 10.001288436s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:44:52.211098   84300 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:44:52.211147   84300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:44:52.222847   84300 system_svc.go:56] duration metric: took 11.738978ms WaitForService to wait for kubelet
	I0916 10:44:52.222882   84300 kubeadm.go:582] duration metric: took 17.615716999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:44:52.222918   84300 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:44:52.407346   84300 request.go:632] Waited for 184.350348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:44:52.407406   84300 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:44:52.407413   84300 round_trippers.go:469] Request Headers:
	I0916 10:44:52.407423   84300 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.407429   84300 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.410736   84300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:52.411824   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:52.411844   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:52.411853   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:52.411857   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:52.411861   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:52.411869   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:52.411877   84300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:52.411882   84300 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:52.411887   84300 node_conditions.go:105] duration metric: took 188.963013ms to run NodePressure ...
	I0916 10:44:52.411904   84300 start.go:241] waiting for startup goroutines ...
	I0916 10:44:52.411926   84300 start.go:255] writing updated cluster config ...
	I0916 10:44:52.412267   84300 ssh_runner.go:195] Run: rm -f paused
	I0916 10:44:52.418181   84300 out.go:177] * Done! kubectl is now configured to use "ha-107957" cluster and "default" namespace by default
	E0916 10:44:52.419565   84300 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 10:42:58 ha-107957 crio[680]: time="2024-09-16 10:42:58.018051022Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:42:59 ha-107957 conmon[1660]: conmon 39418ba2ee69cd53b6ed <ninfo>: container 1685 exited with status 1
	Sep 16 10:42:59 ha-107957 crio[680]: time="2024-09-16 10:42:59.600240388Z" level=info msg="Removing container: b62fb4f3f2277332b423e6bc67e1c7d3d292187d517a8880811560940a7ca203" id=ea1229ab-5e11-4d0d-acbe-bc2857b79da3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:42:59 ha-107957 crio[680]: time="2024-09-16 10:42:59.616562226Z" level=info msg="Removed container b62fb4f3f2277332b423e6bc67e1c7d3d292187d517a8880811560940a7ca203: kube-system/kube-controller-manager-ha-107957/kube-controller-manager" id=ea1229ab-5e11-4d0d-acbe-bc2857b79da3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:43:16 ha-107957 conmon[1444]: conmon ae0e18d6bb34036a1848 <ninfo>: container 1467 exited with status 1
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.634881649Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=04a75d1b-a335-4321-b684-15d3cd6bfd62 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.635112997Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=04a75d1b-a335-4321-b684-15d3cd6bfd62 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.635795479Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=90e07a1b-9185-41fb-9360-a72f128ef950 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.636028822Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=90e07a1b-9185-41fb-9360-a72f128ef950 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.636752438Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=818d20c2-2597-49e4-b45e-4e03b05f5675 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.636852553Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.649978488Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6e845d6e76532a99f82bce5ccdc321f4e7794d225a74c2ee39f051c762e054c1/merged/etc/passwd: no such file or directory"
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.650035071Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6e845d6e76532a99f82bce5ccdc321f4e7794d225a74c2ee39f051c762e054c1/merged/etc/group: no such file or directory"
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.687009905Z" level=info msg="Created container b3e2e0ca189ccaf66f34015510803cc965f57ad799e60cecbb389662d8f01662: kube-system/storage-provisioner/storage-provisioner" id=818d20c2-2597-49e4-b45e-4e03b05f5675 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.687690121Z" level=info msg="Starting container: b3e2e0ca189ccaf66f34015510803cc965f57ad799e60cecbb389662d8f01662" id=0fcdf1ce-2e97-45cd-8db5-51f1afbccee6 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:43:16 ha-107957 crio[680]: time="2024-09-16 10:43:16.693921465Z" level=info msg="Started container" PID=2033 containerID=b3e2e0ca189ccaf66f34015510803cc965f57ad799e60cecbb389662d8f01662 description=kube-system/storage-provisioner/storage-provisioner id=0fcdf1ce-2e97-45cd-8db5-51f1afbccee6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00ac7f2c46071b692a9249a471b84dcc168842fc721109e99c5131ae32afefe3
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.308326390Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=09d48ba5-34e3-496c-95b2-148104566a5d name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.308550212Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748],Size_:89437508,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=09d48ba5-34e3-496c-95b2-148104566a5d name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.309264505Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=0f98c8f4-83d9-4429-ac5a-18be9e3e1869 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.309525492Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748],Size_:89437508,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=0f98c8f4-83d9-4429-ac5a-18be9e3e1869 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.310262185Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-107957/kube-controller-manager" id=5e3e3543-489c-4dd4-905d-2f72b4867678 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.310350193Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.379258030Z" level=info msg="Created container f4660fca9c01c38f993297361261cc15e7df18e02f94dca0d8ac816cf584c1ec: kube-system/kube-controller-manager-ha-107957/kube-controller-manager" id=5e3e3543-489c-4dd4-905d-2f72b4867678 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.379971440Z" level=info msg="Starting container: f4660fca9c01c38f993297361261cc15e7df18e02f94dca0d8ac816cf584c1ec" id=83446292-d67c-4625-bb19-3e182d87e4ab name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:43:19 ha-107957 crio[680]: time="2024-09-16 10:43:19.386948236Z" level=info msg="Started container" PID=2078 containerID=f4660fca9c01c38f993297361261cc15e7df18e02f94dca0d8ac816cf584c1ec description=kube-system/kube-controller-manager-ha-107957/kube-controller-manager id=83446292-d67c-4625-bb19-3e182d87e4ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef2c25480d442608270277214ef617602e2aae4331d7b7a3cad46782055194f1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f4660fca9c01c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Running             kube-controller-manager   4                   ef2c25480d442       kube-controller-manager-ha-107957
	b3e2e0ca189cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Running             storage-provisioner       3                   00ac7f2c46071       storage-provisioner
	c65b6383645d7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   2 minutes ago        Running             kube-apiserver            2                   549290fd9da05       kube-apiserver-ha-107957
	707e3f5e6cf5f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 minutes ago        Running             coredns                   1                   a71d6b745d7d7       coredns-7c65d6cfc9-mhp28
	a6776c0a5cffd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   2 minutes ago        Running             busybox                   1                   cadc8cc1ef465       busybox-7dff88458-m2jh6
	1ecd5a9c6f117       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   2 minutes ago        Running             kindnet-cni               1                   a530b4e250ccf       kindnet-rwcs2
	39418ba2ee69c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   2 minutes ago        Exited              kube-controller-manager   3                   ef2c25480d442       kube-controller-manager-ha-107957
	5b39b844ec63f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   2 minutes ago        Running             kube-proxy                1                   7f089cb0a4be1       kube-proxy-5ctr8
	ae0e18d6bb340       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 minutes ago        Exited              storage-provisioner       2                   00ac7f2c46071       storage-provisioner
	ec3ca649ae360       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 minutes ago        Running             coredns                   1                   5ecf425dc4f1a       coredns-7c65d6cfc9-t9xdr
	1dd517b33737f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   3 minutes ago        Running             kube-scheduler            1                   db1ccfb4ef4ae       kube-scheduler-ha-107957
	fdf46ba4e0cbd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   3 minutes ago        Running             etcd                      1                   09f8f07b93284       etcd-ha-107957
	e0c3235d93b86       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   3 minutes ago        Exited              kube-apiserver            1                   549290fd9da05       kube-apiserver-ha-107957
	1726b85ca7959       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   3 minutes ago        Running             kube-vip                  0                   e2d95535fe24c       kube-vip-ha-107957
	
	
	==> coredns [707e3f5e6cf5f44178aefb4608fa868a0fdd02ae4549148e36613f6c3daea736] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56646 - 57258 "HINFO IN 9196155102493337332.189722933387082711. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01294144s
	
	
	==> coredns [ec3ca649ae3600b62c32d49ff225ba649381a8411daa8da337783b86a2d34c83] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35881 - 52691 "HINFO IN 4398762509095049815.5941857700825401831. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008094361s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[550081783]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:42:45.390) (total time: 30000ms):
	Trace[550081783]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:43:15.390)
	Trace[550081783]: [30.000911823s] [30.000911823s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1244131585]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:42:45.390) (total time: 30001ms):
	Trace[1244131585]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:43:15.390)
	Trace[1244131585]: [30.001002069s] [30.001002069s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1773993343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:42:45.390) (total time: 30001ms):
	Trace[1773993343]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:43:15.390)
	Trace[1773993343]: [30.001126531s] [30.001126531s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-107957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_37_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:37:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:44:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:42:26 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:42:26 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:42:26 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:42:26 +0000   Mon, 16 Sep 2024 10:42:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-107957
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb6a255d792347a7a70ab567e3691177
	  System UUID:                4b3cbb31-41b2-4aeb-852f-1a17b0b6a69f
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m2jh6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 coredns-7c65d6cfc9-mhp28             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m41s
	  kube-system                 coredns-7c65d6cfc9-t9xdr             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m41s
	  kube-system                 etcd-ha-107957                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m46s
	  kube-system                 kindnet-rwcs2                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m41s
	  kube-system                 kube-apiserver-ha-107957             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-controller-manager-ha-107957    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-proxy-5ctr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-scheduler-ha-107957             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-vip-ha-107957                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m39s                  kube-proxy       
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 7m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m46s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m46s                  kubelet          Node ha-107957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m46s                  kubelet          Node ha-107957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m46s                  kubelet          Node ha-107957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m42s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   NodeReady                7m30s                  kubelet          Node ha-107957 status is now: NodeReady
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           6m17s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           3m49s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Warning  CgroupV1                 3m17s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m17s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m17s (x8 over 3m17s)  kubelet          Node ha-107957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m17s (x8 over 3m17s)  kubelet          Node ha-107957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m17s (x7 over 3m17s)  kubelet          Node ha-107957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m47s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           104s                   node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	
	
	Name:               ha-107957-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_37_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:37:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:45:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:42:19 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:42:19 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:42:19 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:42:19 +0000   Mon, 16 Sep 2024 10:38:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-107957-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 7147ff1d91514f98b425c83da9dd1da6
	  System UUID:                15471af5-ad40-4515-bf0c-79f0cc3f164e
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-plmdj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 etcd-ha-107957-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m27s
	  kube-system                 kindnet-sjkjx                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m28s
	  kube-system                 kube-apiserver-ha-107957-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-controller-manager-ha-107957-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-proxy-qtxh9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-scheduler-ha-107957-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-vip-ha-107957-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m37s                  kube-proxy       
	  Normal   Starting                 4m5s                   kube-proxy       
	  Normal   Starting                 7m25s                  kube-proxy       
	  Normal   NodeHasSufficientPID     7m28s (x7 over 7m28s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  7m28s (x8 over 7m28s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m28s (x8 over 7m28s)  kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m27s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           6m17s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   NodeHasSufficientPID     4m24s (x7 over 4m24s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m24s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m24s (x8 over 4m24s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m24s (x8 over 4m24s)  kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           3m49s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   NodeHasSufficientMemory  3m16s (x8 over 3m16s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 3m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m16s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    3m16s (x8 over 3m16s)  kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m16s (x7 over 3m16s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m47s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           104s                   node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	
	
	Name:               ha-107957-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_51_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:45:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:45:02 +0000   Mon, 16 Sep 2024 10:44:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:45:02 +0000   Mon, 16 Sep 2024 10:44:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:45:02 +0000   Mon, 16 Sep 2024 10:44:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:45:02 +0000   Mon, 16 Sep 2024 10:44:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-107957-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 06348169b7144f188cc318225419717b
	  System UUID:                85f6a07b-6b9f-43fc-98ae-305e46935522
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5jwbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kindnet-4lkzl              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m15s
	  kube-system                 kube-proxy-hm8zn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14s                    kube-proxy       
	  Normal   Starting                 5m13s                  kube-proxy       
	  Warning  CgroupV1                 5m15s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           5m15s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   Starting                 5m15s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m15s (x2 over 5m15s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m15s (x2 over 5m15s)  kubelet          Node ha-107957-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m15s (x2 over 5m15s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m12s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   RegisteredNode           5m12s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   NodeReady                5m3s                   kubelet          Node ha-107957-m04 status is now: NodeReady
	  Normal   RegisteredNode           3m49s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   RegisteredNode           2m47s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   NodeNotReady             2m6s                   node-controller  Node ha-107957-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           104s                   node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   Starting                 36s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     30s (x7 over 36s)      kubelet          Node ha-107957-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  24s (x8 over 36s)      kubelet          Node ha-107957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 36s)      kubelet          Node ha-107957-m04 status is now: NodeHasNoDiskPressure
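
	The three node descriptions above (ha-107957, ha-107957-m02, ha-107957-m04; ha-107957-m03 has already been removed by this point) all report Ready=True. A hedged client-go sketch that reads the same Ready condition programmatically; the kubeconfig resolution and the program are illustrative only, not part of the test harness:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes a reachable cluster via the default kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Mirrors the Ready row in the Conditions tables above.
					fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}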
	
	
	==> dmesg <==
	[ +14.884982] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep16 10:42] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000003] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000001] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +1.026832] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000009] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004024] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +2.011846] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +4.255692] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000008] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.003955] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[Sep16 10:43] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-1162a04f8fb0
	[  +0.000004] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000025] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	
	
	==> etcd [fdf46ba4e0cbd100b2458ee180951734b69e24732ffe661bf342f56ac17797fc] <==
	{"level":"warn","ts":"2024-09-16T10:44:02.517318Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"15aadc1eb541585","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-16T10:44:06.116505Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:06.116563Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:06.117429Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:06.122064Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"15aadc1eb541585","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:44:06.122189Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:06.127829Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"15aadc1eb541585","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:44:06.127921Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:56.306261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 12748002774085638657)"}
	{"level":"info","ts":"2024-09-16T10:44:56.307874Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"15aadc1eb541585","removed-remote-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-09-16T10:44:56.307925Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"15aadc1eb541585"}
	{"level":"warn","ts":"2024-09-16T10:44:56.308211Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:56.308295Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"15aadc1eb541585"}
	{"level":"warn","ts":"2024-09-16T10:44:56.308582Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:56.308615Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:56.308654Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"warn","ts":"2024-09-16T10:44:56.308774Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585","error":"context canceled"}
	{"level":"warn","ts":"2024-09-16T10:44:56.308807Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"15aadc1eb541585","error":"failed to read 15aadc1eb541585 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-16T10:44:56.308826Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"warn","ts":"2024-09-16T10:44:56.308921Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585","error":"context canceled"}
	{"level":"info","ts":"2024-09-16T10:44:56.308950Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:56.308964Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"15aadc1eb541585"}
	{"level":"info","ts":"2024-09-16T10:44:56.308980Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"15aadc1eb541585"}
	{"level":"warn","ts":"2024-09-16T10:44:56.316764Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"15aadc1eb541585"}
	{"level":"warn","ts":"2024-09-16T10:44:56.319202Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"15aadc1eb541585"}
	
	
	==> kernel <==
	 10:45:05 up 27 min,  0 users,  load average: 0.25, 0.61, 0.52
	Linux ha-107957 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [1ecd5a9c6f117a1d7664e65f07cd8cbbcf295737acbeac64830a4172ae868384] <==
	I0916 10:44:27.997676       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:44:38.001449       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:44:38.001495       1 main.go:299] handling current node
	I0916 10:44:38.001513       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:44:38.001520       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:44:38.001651       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:44:38.001661       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:44:38.001704       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:44:38.001712       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:44:47.994559       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:44:47.994604       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:44:47.994748       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:44:47.994753       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:44:47.994798       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:44:47.994802       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:44:47.994837       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:44:47.994846       1 main.go:299] handling current node
	I0916 10:44:57.994764       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:44:57.994798       1 main.go:299] handling current node
	I0916 10:44:57.994816       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:44:57.994835       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:44:57.994945       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:44:57.994953       1 main.go:322] Node ha-107957-m03 has CIDR [10.244.2.0/24] 
	I0916 10:44:57.994990       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:44:57.994997       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c65b6383645d76fddc479ceaadaddda3120f7f3d93c48ec79978fba8f4bea0c5] <==
	I0916 10:42:59.003012       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0916 10:42:59.003052       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0916 10:42:59.003090       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0916 10:42:59.096982       1 controller.go:142] Starting OpenAPI controller
	I0916 10:42:59.195887       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:42:59.206451       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:42:59.206590       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:42:59.206741       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:42:59.206792       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:42:59.206954       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:42:59.207021       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:42:59.207093       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:42:59.207131       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:42:59.236377       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:42:59.236991       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:42:59.237633       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:42:59.293663       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:42:59.293807       1 policy_source.go:224] refreshing policies
	I0916 10:42:59.293778       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:42:59.297263       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:42:59.312605       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:42:59.940008       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:43:00.226630       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0916 10:43:00.228053       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:43:00.235359       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e0c3235d93b8611b7cb5e1f0197a91afcb1082e50ecdbf86376b7f4d3dcc8490] <==
	E0916 10:42:14.120704       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.120716       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.120719       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.120725       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.120783       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.120891       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.120916       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.121101       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.121120       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.121136       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.121200       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:42:14.193735       1 watcher.go:342] watch chan error: etcdserver: no leader
	I0916 10:42:14.211166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:42:14.245879       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:42:14.251643       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:42:14.251661       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:42:14.251665       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:42:14.251730       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:42:14.251810       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:42:14.256465       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:42:14.294846       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:42:14.305234       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:42:14.305257       1 policy_source.go:224] refreshing policies
	I0916 10:42:14.387004       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	F0916 10:42:57.252095       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [39418ba2ee69cd53b6edf1336165dc7a63a15e8749a33be700788131fffd8624] <==
	I0916 10:42:47.673700       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:42:48.183723       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:42:48.183753       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:42:48.185009       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:42:48.185009       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:42:48.185232       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:42:48.185355       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:42:58.995263       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [f4660fca9c01c38f993297361261cc15e7df18e02f94dca0d8ac816cf584c1ec] <==
	I0916 10:44:41.984844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:44:41.984915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-107957-m04"
	I0916 10:44:41.998774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:44:44.339401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	I0916 10:44:53.042456       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m03"
	I0916 10:44:53.056366       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m03"
	I0916 10:44:53.120582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.776636ms"
	I0916 10:44:53.222164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="101.507072ms"
	I0916 10:44:53.233156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.847108ms"
	I0916 10:44:53.233664       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.769µs"
	I0916 10:44:53.255211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.169321ms"
	I0916 10:44:53.255587       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.144µs"
	I0916 10:44:55.171472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.133µs"
	I0916 10:44:55.831603       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.111µs"
	I0916 10:44:55.835964       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.836µs"
	I0916 10:44:57.297644       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.894913ms"
	I0916 10:44:57.297751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.688µs"
	I0916 10:44:59.421923       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-107957-m04"
	I0916 10:44:59.421967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m03"
	E0916 10:45:01.539736       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	E0916 10:45:01.539772       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	E0916 10:45:01.539781       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	E0916 10:45:01.539788       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	E0916 10:45:01.539794       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	I0916 10:45:02.404156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-107957-m04"
	
	
	==> kube-proxy [5b39b844ec63f2ae1f0a21ebbfec09b3270c519719d8db3f1c9c8022f97087c2] <==
	I0916 10:42:46.434336       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:42:46.534019       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:42:46.534087       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:42:46.552768       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:42:46.552835       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:42:46.554685       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:42:46.555024       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:42:46.555072       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:42:46.556136       1 config.go:199] "Starting service config controller"
	I0916 10:42:46.556212       1 config.go:328] "Starting node config controller"
	I0916 10:42:46.556255       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:42:46.556146       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:42:46.556671       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:42:46.556756       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:42:46.656509       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:42:46.657395       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:42:46.657439       1 shared_informer.go:320] Caches are synced for service config
	E0916 10:42:59.205994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.254:57502->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:59.206107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.254:57488->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:59.206194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io) - error from a previous attempt: read tcp 192.168.49.254:57506->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kube-scheduler [1dd517b33737f98a01f54d81467fb5bd51e6217c617c48fe093131061732e565] <==
	W0916 10:42:06.932620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:42:06.932666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:42:07.395490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:42:07.395609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:42:08.095060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:42:08.095120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:42:12.495649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:42:12.495818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:42:12.795036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:42:12.795092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:42:14.903043       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:42:58.958857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:34510->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.958963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:34540->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.959085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:34450->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.959202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:34534->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.959280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:34496->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.959345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:34444->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.959410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:34592->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.994077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:34442->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.994245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:34566->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.994330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:34466->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.997598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:34582->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.997788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:34526->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.997860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:34542->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0916 10:42:58.997950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:34554->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 16 10:43:08 ha-107957 kubelet[837]: E0916 10:43:08.327256     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483388327009420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:16 ha-107957 kubelet[837]: I0916 10:43:16.634456     837 scope.go:117] "RemoveContainer" containerID="ae0e18d6bb34036a1848919561789242149f1e693219472f054f04588d0e7af9"
	Sep 16 10:43:18 ha-107957 kubelet[837]: E0916 10:43:18.328278     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483398328087259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:18 ha-107957 kubelet[837]: E0916 10:43:18.328321     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483398328087259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:19 ha-107957 kubelet[837]: I0916 10:43:19.307780     837 scope.go:117] "RemoveContainer" containerID="39418ba2ee69cd53b6edf1336165dc7a63a15e8749a33be700788131fffd8624"
	Sep 16 10:43:28 ha-107957 kubelet[837]: E0916 10:43:28.329329     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483408329142699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:28 ha-107957 kubelet[837]: E0916 10:43:28.329393     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483408329142699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:38 ha-107957 kubelet[837]: E0916 10:43:38.330451     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483418330265351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:38 ha-107957 kubelet[837]: E0916 10:43:38.330495     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483418330265351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:48 ha-107957 kubelet[837]: E0916 10:43:48.331693     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483428331432668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:48 ha-107957 kubelet[837]: E0916 10:43:48.331739     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483428331432668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:58 ha-107957 kubelet[837]: E0916 10:43:58.332862     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483438332661636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:58 ha-107957 kubelet[837]: E0916 10:43:58.332908     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483438332661636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:08 ha-107957 kubelet[837]: E0916 10:44:08.333920     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483448333746812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:08 ha-107957 kubelet[837]: E0916 10:44:08.333966     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483448333746812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:18 ha-107957 kubelet[837]: E0916 10:44:18.335002     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483458334822448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:18 ha-107957 kubelet[837]: E0916 10:44:18.335048     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483458334822448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:28 ha-107957 kubelet[837]: E0916 10:44:28.336151     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483468335980711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:28 ha-107957 kubelet[837]: E0916 10:44:28.336193     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483468335980711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:38 ha-107957 kubelet[837]: E0916 10:44:38.337343     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483478337104482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:38 ha-107957 kubelet[837]: E0916 10:44:38.337384     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483478337104482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:48 ha-107957 kubelet[837]: E0916 10:44:48.338296     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483488338120426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:48 ha-107957 kubelet[837]: E0916 10:44:48.338340     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483488338120426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:58 ha-107957 kubelet[837]: E0916 10:44:58.339347     837 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483498339185033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:58 ha-107957 kubelet[837]: E0916 10:44:58.339386     837 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483498339185033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-107957 -n ha-107957
helpers_test.go:261: (dbg) Run:  kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (580.008µs)
helpers_test.go:263: kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (13.85s)
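
Note on the recurring failure mode: "fork/exec /usr/local/bin/kubectl: exec format error" is the kernel's ENOEXEC, returned when the file at that path is not an executable the host can run, typically because its ELF header names a different architecture than the amd64 agent, or because the file is truncated. A minimal diagnostic sketch in Go (a hypothetical helper, not part of the minikube test suite; it assumes only the path reported in the failures above):

package main

import (
	"debug/elf"
	"fmt"
	"runtime"
)

func main() {
	// Hypothetical diagnostic, not part of the test suite.
	const path = "/usr/local/bin/kubectl" // path reported by the failing kubectl steps
	f, err := elf.Open(path)
	if err != nil {
		// A truncated or non-ELF file also yields "exec format error" at exec time.
		fmt.Printf("%s: not a readable ELF binary: %v\n", path, err)
		return
	}
	defer f.Close()
	// On this amd64 agent the machine field should be EM_X86_64; anything else
	// (for example EM_AARCH64) would explain the exec format error.
	fmt.Printf("binary machine=%v class=%v, host arch=%s\n", f.Machine, f.Class, runtime.GOARCH)
}

The minikube and docker steps in these tests keep succeeding; only direct kubectl invocations fail, which points at the kubectl binary itself rather than at the cluster.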

TestMultiControlPlane/serial/RestartCluster (81.19s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-107957 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0916 10:46:06.692306   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-107957 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.515485563s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:584: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (514.749µs)
ha_test.go:586: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-107957
helpers_test.go:235: (dbg) docker inspect ha-107957:

-- stdout --
	[
	    {
	        "Id": "8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd",
	        "Created": "2024-09-16T10:37:05.006225665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:45:42.880084562Z",
	            "FinishedAt": "2024-09-16T10:45:42.05102847Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/hosts",
	        "LogPath": "/var/lib/docker/containers/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd-json.log",
	        "Name": "/ha-107957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-107957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-107957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37e28b226691bb0f189e219ba7ae0c9b8430430b6e6e47094792a78d8c8076b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-107957",
	                "Source": "/var/lib/docker/volumes/ha-107957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-107957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-107957",
	                "name.minikube.sigs.k8s.io": "ha-107957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57fcbc72e7217e5156771ecde94de19948f6e074ba9fc92a166432ac8e9bdbf3",
	            "SandboxKey": "/var/run/docker/netns/57fcbc72e721",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-107957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1162a04f8fb0eca4f56c515332b1b6b72501106e380521da303a5999505b78f5",
	                    "EndpointID": "763789f2b9ae3f46b54077e39a71cd25ed0850f2f6b4a7476eaea5c9f7d33264",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-107957",
	                        "8934c54a2cf0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
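
The "--format={{.State.Status}}" probes used throughout this log (for example "docker container inspect ha-107957 --format={{.State.Status}}" further down) evaluate a Go text/template against the same JSON document shown above and print a single field. A minimal sketch of that mechanism, assuming a trimmed stand-in for the inspect output rather than the full document:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Trimmed, illustrative stand-in for the docker inspect JSON above.
	raw := []byte(`[{"Name":"/ha-107957","State":{"Status":"running","Running":true}}]`)
	var containers []map[string]interface{}
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	// docker applies the --format template to each inspected object in turn.
	tmpl := template.Must(template.New("status").Parse("{{.State.Status}}\n"))
	for _, c := range containers {
		if err := tmpl.Execute(os.Stdout, c); err != nil {
			panic(err) // with the stand-in above this prints: running
		}
	}
}
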
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-107957 -n ha-107957
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-107957 logs -n 25: (1.217930757s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04:/home/docker/cp-test_ha-107957-m03_ha-107957-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m04 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m03_ha-107957-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-107957 cp testdata/cp-test.txt                                               | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile432092999/001/cp-test_ha-107957-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957:/home/docker/cp-test_ha-107957-m04_ha-107957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957 sudo cat                                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957.txt                                |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m02:/home/docker/cp-test_ha-107957-m04_ha-107957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m02 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m03:/home/docker/cp-test_ha-107957-m04_ha-107957-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n                                                                | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | ha-107957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-107957 ssh -n ha-107957-m03 sudo cat                                         | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /home/docker/cp-test_ha-107957-m04_ha-107957-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-107957 node stop m02 -v=7                                                    | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-107957 node start m02 -v=7                                                   | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:41 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-107957 -v=7                                                          | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-107957 -v=7                                                               | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-107957 --wait=true -v=7                                                   | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:44 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-107957                                                               | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC |                     |
	| node    | ha-107957 node delete m03 -v=7                                                  | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:45 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-107957 stop -v=7                                                             | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC | 16 Sep 24 10:45 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-107957 --wait=true                                                        | ha-107957 | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC | 16 Sep 24 10:47 UTC |
	|         | -v=7 --alsologtostderr                                                          |           |         |         |                     |                     |
	|         | --driver=docker                                                                 |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                        |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:45:42
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:45:42.508364   98140 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:42.508476   98140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:42.508485   98140 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:42.508490   98140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:42.508687   98140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:45:42.509224   98140 out.go:352] Setting JSON to false
	I0916 10:45:42.510189   98140 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1683,"bootTime":1726481860,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:45:42.510288   98140 start.go:139] virtualization: kvm guest
	I0916 10:45:42.512864   98140 out.go:177] * [ha-107957] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:45:42.514475   98140 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:45:42.514505   98140 notify.go:220] Checking for updates...
	I0916 10:45:42.517445   98140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:45:42.518863   98140 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:45:42.521245   98140 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:45:42.522846   98140 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:45:42.524330   98140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:45:42.526104   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:42.526695   98140 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:45:42.550323   98140 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:45:42.550401   98140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:45:42.602602   98140 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:45:42.593257662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:45:42.602719   98140 docker.go:318] overlay module found
	I0916 10:45:42.604686   98140 out.go:177] * Using the docker driver based on existing profile
	I0916 10:45:42.606091   98140 start.go:297] selected driver: docker
	I0916 10:45:42.606108   98140 start.go:901] validating driver "docker" against &{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:45:42.606243   98140 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:45:42.606337   98140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:45:42.656458   98140 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:45:42.64569512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:45:42.657439   98140 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:45:42.657481   98140 cni.go:84] Creating CNI manager for ""
	I0916 10:45:42.657519   98140 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:45:42.657595   98140 start.go:340] cluster config:
	{Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:45:42.659922   98140 out.go:177] * Starting "ha-107957" primary control-plane node in "ha-107957" cluster
	I0916 10:45:42.661365   98140 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:45:42.662849   98140 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:45:42.664154   98140 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:45:42.664201   98140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:45:42.664213   98140 cache.go:56] Caching tarball of preloaded images
	I0916 10:45:42.664222   98140 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:45:42.664345   98140 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:45:42.664361   98140 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:45:42.664534   98140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:45:42.687471   98140 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:45:42.687489   98140 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:45:42.687555   98140 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:45:42.687566   98140 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:45:42.687570   98140 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:45:42.687577   98140 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:45:42.687583   98140 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:45:42.688778   98140 image.go:273] response: 
	I0916 10:45:42.746615   98140 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:45:42.746678   98140 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:45:42.746725   98140 start.go:360] acquireMachinesLock for ha-107957: {Name:mkd47d2ce5dbb0c6b4cd5ea9479cc8820c855026 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:45:42.746794   98140 start.go:364] duration metric: took 46.371µs to acquireMachinesLock for "ha-107957"
	I0916 10:45:42.746812   98140 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:45:42.746816   98140 fix.go:54] fixHost starting: 
	I0916 10:45:42.747022   98140 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:45:42.763969   98140 fix.go:112] recreateIfNeeded on ha-107957: state=Stopped err=<nil>
	W0916 10:45:42.764003   98140 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:45:42.766308   98140 out.go:177] * Restarting existing docker container for "ha-107957" ...
	I0916 10:45:42.768079   98140 cli_runner.go:164] Run: docker start ha-107957
	I0916 10:45:43.052381   98140 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:45:43.071478   98140 kic.go:430] container "ha-107957" state is running.
	I0916 10:45:43.071828   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:45:43.091009   98140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:45:43.091345   98140 machine.go:93] provisionDockerMachine start ...
	I0916 10:45:43.091426   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:43.109984   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:43.110194   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0916 10:45:43.110209   98140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:45:43.110812   98140 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54406->127.0.0.1:32828: read: connection reset by peer
	I0916 10:45:46.245164   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957
	
	I0916 10:45:46.245195   98140 ubuntu.go:169] provisioning hostname "ha-107957"
	I0916 10:45:46.245244   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:46.262760   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:46.262967   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0916 10:45:46.262982   98140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957 && echo "ha-107957" | sudo tee /etc/hostname
	I0916 10:45:46.405280   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957
	
	I0916 10:45:46.405384   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:46.422581   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:46.422761   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0916 10:45:46.422778   98140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:45:46.553769   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:45:46.553808   98140 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:45:46.553863   98140 ubuntu.go:177] setting up certificates
	I0916 10:45:46.553876   98140 provision.go:84] configureAuth start
	I0916 10:45:46.553945   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:45:46.571318   98140 provision.go:143] copyHostCerts
	I0916 10:45:46.571356   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:45:46.571383   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:45:46.571393   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:45:46.571458   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:45:46.571538   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:45:46.571556   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:45:46.571560   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:45:46.571591   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:45:46.571636   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:45:46.571652   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:45:46.571658   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:45:46.571678   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:45:46.571775   98140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957 san=[127.0.0.1 192.168.49.2 ha-107957 localhost minikube]
	I0916 10:45:46.680509   98140 provision.go:177] copyRemoteCerts
	I0916 10:45:46.680580   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:45:46.680620   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:46.698811   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:45:46.794889   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:45:46.794976   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:45:46.818184   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:45:46.818255   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:45:46.840746   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:45:46.840801   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:45:46.864075   98140 provision.go:87] duration metric: took 310.184354ms to configureAuth
	I0916 10:45:46.864106   98140 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:45:46.864314   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:46.864422   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:46.882348   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:46.882542   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0916 10:45:46.882565   98140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:45:47.232144   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:45:47.232171   98140 machine.go:96] duration metric: took 4.140804206s to provisionDockerMachine
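	(The CRIO_MINIKUBE_OPTIONS drop-in written above only takes effect if the kicbase image's crio unit reads /etc/sysconfig/crio.minikube as an environment file, an assumption about the image that this log does not show. A quick confirmation sketch:)
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i EnvironmentFile   # assumption: the unit sources the sysconfig drop-in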
	I0916 10:45:47.232185   98140 start.go:293] postStartSetup for "ha-107957" (driver="docker")
	I0916 10:45:47.232199   98140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:45:47.232261   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:45:47.232317   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:47.251815   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:45:47.345780   98140 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:45:47.348866   98140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:45:47.348907   98140 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:45:47.348918   98140 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:45:47.348925   98140 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:45:47.348940   98140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:45:47.349003   98140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:45:47.349180   98140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:45:47.349200   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:45:47.349327   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:45:47.357503   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:45:47.379345   98140 start.go:296] duration metric: took 147.144589ms for postStartSetup
	I0916 10:45:47.379418   98140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:47.379448   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:47.397387   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:45:47.490181   98140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:45:47.494396   98140 fix.go:56] duration metric: took 4.747568098s for fixHost
	I0916 10:45:47.494420   98140 start.go:83] releasing machines lock for "ha-107957", held for 4.747616219s
	I0916 10:45:47.494474   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:45:47.511861   98140 ssh_runner.go:195] Run: cat /version.json
	I0916 10:45:47.511908   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:47.511922   98140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:45:47.511990   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:47.530353   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:45:47.531199   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:45:47.697992   98140 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:47.702305   98140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:45:47.841518   98140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:45:47.846004   98140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:45:47.854345   98140 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:45:47.854416   98140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:45:47.862851   98140 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:45:47.862874   98140 start.go:495] detecting cgroup driver to use...
	I0916 10:45:47.862905   98140 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:45:47.862940   98140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:45:47.873513   98140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:45:47.883333   98140 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:45:47.883384   98140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:45:47.895190   98140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:45:47.905779   98140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:45:47.987633   98140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:45:48.062220   98140 docker.go:233] disabling docker service ...
	I0916 10:45:48.062288   98140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:45:48.073840   98140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:45:48.084567   98140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:45:48.167903   98140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:45:48.247545   98140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:45:48.258019   98140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:45:48.272602   98140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:45:48.272657   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:48.282259   98140 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:45:48.282326   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:48.291970   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:48.301436   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:48.310719   98140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:45:48.319128   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:48.328341   98140 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:48.337246   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:48.346550   98140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:45:48.354545   98140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
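	(The sed edits above amount to a small CRI-O drop-in. Reconstructed from the commands, not a capture of the real file, /etc/crio/crio.conf.d/02-crio.conf should now contain roughly:)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	(After the restart below, the endpoint written to /etc/crictl.yaml can be exercised directly, e.g. `sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version`; illustrative, not part of the recorded run.)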
	I0916 10:45:48.362375   98140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:48.438815   98140 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:45:48.536131   98140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:45:48.536195   98140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:45:48.539726   98140 start.go:563] Will wait 60s for crictl version
	I0916 10:45:48.539791   98140 ssh_runner.go:195] Run: which crictl
	I0916 10:45:48.542958   98140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:45:48.576023   98140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:45:48.576141   98140 ssh_runner.go:195] Run: crio --version
	I0916 10:45:48.609138   98140 ssh_runner.go:195] Run: crio --version
	I0916 10:45:48.645076   98140 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:45:48.646354   98140 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:45:48.663254   98140 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:45:48.667254   98140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
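	(The /etc/hosts one-liner above is idempotent: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is written through a temp file plus `sudo cp` because a plain shell redirection would not run with root privileges. The same pattern generalizes; host and IP below are hypothetical:)
	{ grep -v $'\tmyhost.internal$' /etc/hosts; echo $'10.0.0.7\tmyhost.internal'; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts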
	I0916 10:45:48.677634   98140 kubeadm.go:883] updating cluster {Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:45:48.677769   98140 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:45:48.677815   98140 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:45:48.718678   98140 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:45:48.718698   98140 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:45:48.718741   98140 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:45:48.751934   98140 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:45:48.751956   98140 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:45:48.751964   98140 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0916 10:45:48.752052   98140 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
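	(The unit drop-in above follows the standard kubeadm mechanism: the empty ExecStart= clears the packaged command before the override replaces it, and the file lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, per the scp later in this log. Once the daemon is reloaded, the merged result can be inspected, as a sketch:)
	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf override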
	I0916 10:45:48.752119   98140 ssh_runner.go:195] Run: crio config
	I0916 10:45:48.793445   98140 cni.go:84] Creating CNI manager for ""
	I0916 10:45:48.793468   98140 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:45:48.793477   98140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:45:48.793495   98140 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-107957 NodeName:ha-107957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:45:48.793658   98140 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-107957"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
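	(A kubeadm config assembled like the one above can be sanity-checked without mutating node state; `kubeadm init --dry-run` renders the manifests it would write. Shown as an optional verification sketch, not something this run performed:)
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run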
	
	I0916 10:45:48.793679   98140 kube-vip.go:115] generating kube-vip config ...
	I0916 10:45:48.793722   98140 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:45:48.805147   98140 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
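	(The fallback above fires because `lsmod | grep ip_vs` returned nothing, so kube-vip's IPVS-based control-plane load-balancing is skipped and only the ARP-advertised VIP is used. On a host where the modules exist they could be loaded explicitly; a sketch using the standard module names:)
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
	lsmod | grep ip_vs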
	I0916 10:45:48.805240   98140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
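	(With this static pod in place, kube-vip leader election via the plndr-cp-lock lease decides which control-plane node holds the VIP; the winner attaches 192.168.49.254/32 (vip_cidr) to eth0 (vip_interface) and answers ARP for it. An illustrative check on the leader node:)
	ip -4 addr show dev eth0 | grep 192.168.49.254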
	I0916 10:45:48.805285   98140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:45:48.813444   98140 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:45:48.813508   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:45:48.821588   98140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0916 10:45:48.838264   98140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:45:48.855314   98140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0916 10:45:48.871725   98140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:45:48.888361   98140 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:45:48.891756   98140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:45:48.901422   98140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:48.979741   98140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:45:48.992274   98140 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.2
	I0916 10:45:48.992300   98140 certs.go:194] generating shared ca certs ...
	I0916 10:45:48.992316   98140 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:48.992455   98140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:45:48.992491   98140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:45:48.992501   98140 certs.go:256] generating profile certs ...
	I0916 10:45:48.992565   98140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:45:48.992612   98140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.59a829ae
	I0916 10:45:48.992627   98140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.59a829ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 10:45:49.296041   98140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.59a829ae ...
	I0916 10:45:49.296075   98140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.59a829ae: {Name:mke12bf1a99efb32904ea199119df36229b81d44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:49.296247   98140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.59a829ae ...
	I0916 10:45:49.296259   98140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.59a829ae: {Name:mkf119838eff8428ed15258e5c23237de7b753ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:49.296329   98140 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt.59a829ae -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt
	I0916 10:45:49.296488   98140 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.59a829ae -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key
	I0916 10:45:49.296614   98140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:45:49.296628   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:45:49.296645   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:45:49.296661   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:45:49.296671   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:45:49.296680   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:45:49.296691   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:45:49.296708   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:45:49.296720   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:45:49.296769   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:45:49.296796   98140 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:45:49.296806   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:45:49.296827   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:45:49.296848   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:45:49.296871   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:45:49.296905   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:45:49.296938   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:45:49.296953   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:49.296966   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:45:49.297578   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:45:49.319674   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:45:49.340540   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:45:49.361885   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:45:49.383801   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 10:45:49.406118   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:45:49.427025   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:45:49.449064   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:45:49.470134   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:45:49.491121   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:45:49.513379   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:45:49.534825   98140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:45:49.550892   98140 ssh_runner.go:195] Run: openssl version
	I0916 10:45:49.555959   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:45:49.564726   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:45:49.568004   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:45:49.568062   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:45:49.574297   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:45:49.582257   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:45:49.591012   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:49.594486   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:49.594544   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:49.600756   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:45:49.608726   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:45:49.617131   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:45:49.620636   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:45:49.620695   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:45:49.626927   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
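	(The `.0` link names above are OpenSSL subject-hash values: OpenSSL locates a CA in /etc/ssl/certs by hashing its subject, so each symlink must be named <hash>.0. The recorded hash for minikubeCA is b5213941, which is exactly the link created above:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem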
	I0916 10:45:49.635080   98140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:45:49.638321   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:45:49.644356   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:45:49.650682   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:45:49.656739   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:45:49.662820   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:45:49.668948   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
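	(Each `-checkend 86400` above asks whether the certificate expires within the next 86400 seconds, i.e. 24 hours; openssl exits 0 when the cert outlives that window, which is how the restart path concludes that no regeneration is needed. As a standalone check:)
	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "cert valid for at least another 24h"
	fi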
	I0916 10:45:49.674969   98140 kubeadm.go:392] StartCluster: {Name:ha-107957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:45:49.675095   98140 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:45:49.675152   98140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:45:49.707943   98140 cri.go:89] found id: ""
	I0916 10:45:49.708003   98140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:45:49.716573   98140 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:45:49.716595   98140 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:45:49.716646   98140 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:45:49.724509   98140 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:49.724954   98140 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-107957" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:45:49.725093   98140 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "ha-107957" cluster setting kubeconfig missing "ha-107957" context setting]
	I0916 10:45:49.725420   98140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:49.725819   98140 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:45:49.726055   98140 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:45:49.726512   98140 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:45:49.726673   98140 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:45:49.735468   98140 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0916 10:45:49.735489   98140 kubeadm.go:597] duration metric: took 18.8886ms to restartPrimaryControlPlane
	I0916 10:45:49.735498   98140 kubeadm.go:394] duration metric: took 60.541478ms to StartCluster
	I0916 10:45:49.735516   98140 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:49.735575   98140 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:45:49.736158   98140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:49.736363   98140 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:45:49.736385   98140 start.go:241] waiting for startup goroutines ...
	I0916 10:45:49.736392   98140 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:45:49.736583   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:49.740044   98140 out.go:177] * Enabled addons: 
	I0916 10:45:49.741605   98140 addons.go:510] duration metric: took 5.205647ms for enable addons: enabled=[]
	I0916 10:45:49.741646   98140 start.go:246] waiting for cluster config update ...
	I0916 10:45:49.741658   98140 start.go:255] writing updated cluster config ...
	I0916 10:45:49.743486   98140 out.go:201] 
	I0916 10:45:49.745402   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:49.745523   98140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:45:49.747322   98140 out.go:177] * Starting "ha-107957-m02" control-plane node in "ha-107957" cluster
	I0916 10:45:49.748671   98140 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:45:49.750073   98140 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:45:49.751347   98140 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:45:49.751378   98140 cache.go:56] Caching tarball of preloaded images
	I0916 10:45:49.751379   98140 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:45:49.751484   98140 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:45:49.751497   98140 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:45:49.751615   98140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:45:49.771185   98140 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:45:49.771203   98140 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:45:49.771277   98140 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:45:49.771297   98140 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:45:49.771303   98140 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:45:49.771310   98140 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:45:49.771316   98140 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:45:49.772374   98140 image.go:273] response: 
	I0916 10:45:49.829280   98140 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:45:49.829320   98140 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:45:49.829376   98140 start.go:360] acquireMachinesLock for ha-107957-m02: {Name:mkbd1a70c826dc0de88173dfa3a4a79ea68a23fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:45:49.829461   98140 start.go:364] duration metric: took 59.343µs to acquireMachinesLock for "ha-107957-m02"
	I0916 10:45:49.829485   98140 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:45:49.829491   98140 fix.go:54] fixHost starting: m02
	I0916 10:45:49.829794   98140 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:45:49.851366   98140 fix.go:112] recreateIfNeeded on ha-107957-m02: state=Stopped err=<nil>
	W0916 10:45:49.851394   98140 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:45:49.854027   98140 out.go:177] * Restarting existing docker container for "ha-107957-m02" ...
	I0916 10:45:49.855565   98140 cli_runner.go:164] Run: docker start ha-107957-m02
	I0916 10:45:50.127872   98140 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:45:50.146414   98140 kic.go:430] container "ha-107957-m02" state is running.
	I0916 10:45:50.146838   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:45:50.165875   98140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:45:50.166199   98140 machine.go:93] provisionDockerMachine start ...
	I0916 10:45:50.166269   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:50.184221   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:50.184393   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0916 10:45:50.184407   98140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:45:50.185093   98140 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50792->127.0.0.1:32833: read: connection reset by peer
	I0916 10:45:53.316795   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m02
	
	I0916 10:45:53.316819   98140 ubuntu.go:169] provisioning hostname "ha-107957-m02"
	I0916 10:45:53.316876   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:53.334323   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:53.334494   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0916 10:45:53.334508   98140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m02 && echo "ha-107957-m02" | sudo tee /etc/hostname
	I0916 10:45:53.476140   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m02
	
	I0916 10:45:53.476210   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:53.493978   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:53.494165   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0916 10:45:53.494181   98140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:45:53.625541   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:45:53.625576   98140 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:45:53.625598   98140 ubuntu.go:177] setting up certificates
	I0916 10:45:53.625612   98140 provision.go:84] configureAuth start
	I0916 10:45:53.625686   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:45:53.643374   98140 provision.go:143] copyHostCerts
	I0916 10:45:53.643419   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:45:53.643450   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:45:53.643456   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:45:53.643533   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:45:53.643624   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:45:53.643643   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:45:53.643650   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:45:53.643693   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:45:53.643754   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:45:53.643780   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:45:53.643787   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:45:53.643817   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:45:53.643885   98140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m02 san=[127.0.0.1 192.168.49.3 ha-107957-m02 localhost minikube]
	I0916 10:45:53.917196   98140 provision.go:177] copyRemoteCerts
	I0916 10:45:53.917258   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:45:53.917290   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:53.935283   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:45:54.034567   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:45:54.034628   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:45:54.055835   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:45:54.055906   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:45:54.077244   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:45:54.077324   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:45:54.099290   98140 provision.go:87] duration metric: took 473.66112ms to configureAuth
	I0916 10:45:54.099323   98140 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:45:54.099522   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:54.099618   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:54.117483   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:54.117651   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0916 10:45:54.117668   98140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:45:54.457631   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:45:54.457658   98140 machine.go:96] duration metric: took 4.291441068s to provisionDockerMachine
	I0916 10:45:54.457679   98140 start.go:293] postStartSetup for "ha-107957-m02" (driver="docker")
	I0916 10:45:54.457691   98140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:45:54.457750   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:45:54.457798   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:54.475660   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:45:54.570034   98140 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:45:54.573090   98140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:45:54.573127   98140 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:45:54.573139   98140 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:45:54.573146   98140 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:45:54.573159   98140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:45:54.573222   98140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:45:54.573317   98140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:45:54.573328   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:45:54.573452   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:45:54.581594   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:45:54.603853   98140 start.go:296] duration metric: took 146.158108ms for postStartSetup
	I0916 10:45:54.603934   98140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:54.603971   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:54.621314   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:45:54.714505   98140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:45:54.718867   98140 fix.go:56] duration metric: took 4.889370146s for fixHost
	I0916 10:45:54.718893   98140 start.go:83] releasing machines lock for "ha-107957-m02", held for 4.889419627s
	I0916 10:45:54.718960   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m02
	I0916 10:45:54.737526   98140 out.go:177] * Found network options:
	I0916 10:45:54.738811   98140 out.go:177]   - NO_PROXY=192.168.49.2
	W0916 10:45:54.739978   98140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:45:54.740014   98140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:45:54.740080   98140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:45:54.740125   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:54.740189   98140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:45:54.740246   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m02
	I0916 10:45:54.758260   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:45:54.758511   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m02/id_rsa Username:docker}
	I0916 10:45:55.103489   98140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:45:55.109168   98140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:45:55.194071   98140 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:45:55.194154   98140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:45:55.205726   98140 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:45:55.205754   98140 start.go:495] detecting cgroup driver to use...
	I0916 10:45:55.205789   98140 detect.go:187] detected "cgroupfs" cgroup driver on host os
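The "cgroupfs" detection above drives the cgroup_manager value written into the CRI-O config a few steps later. One common probe for the cgroup hierarchy version is checking for the unified-hierarchy marker file; a hypothetical Go sketch (minikube's actual detect.go logic may differ):

    package sketch

    import "os"

    // cgroupV2 reports whether the host mounts the unified cgroup v2 hierarchy;
    // /sys/fs/cgroup/cgroup.controllers exists only on cgroup v2 hosts, so its
    // absence is consistent with the "cgroupfs" (v1) driver detected in the log.
    func cgroupV2() bool {
        _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
        return err == nil
    }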
	I0916 10:45:55.205836   98140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:45:55.299929   98140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:45:55.311532   98140 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:45:55.311590   98140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:45:55.395555   98140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:45:55.408525   98140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:45:55.712772   98140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:45:56.004946   98140 docker.go:233] disabling docker service ...
	I0916 10:45:56.005026   98140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:45:56.037839   98140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:45:56.114656   98140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:45:56.416975   98140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:45:56.640844   98140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:45:56.704193   98140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:45:56.725230   98140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:45:56.725274   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:56.794194   98140 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:45:56.794260   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:56.806684   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:56.819512   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:56.831698   98140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:45:56.894120   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:56.906954   98140 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:56.917752   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:45:56.927638   98140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:45:56.936307   98140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:45:56.994008   98140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:57.170913   98140 ssh_runner.go:195] Run: sudo systemctl restart crio
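The run of sed invocations above rewrites a handful of keys in CRI-O's drop-in config (pause image, cgroup manager, conmon cgroup, default sysctls) before the daemon-reload and restart. A hypothetical Go sketch equivalent to the first of those edits; the file path and image come from the log, the helper name is illustrative:

    package sketch

    import (
        "os"
        "regexp"
    )

    // setPauseImage replaces any existing pause_image line in CRI-O's drop-in
    // config with the pinned image, matching the first sed in the log.
    func setPauseImage(path, image string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
        return os.WriteFile(path, out, 0o644)
    }

Called as setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10") to mirror the logged command.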
	I0916 10:45:57.392922   98140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:45:57.392999   98140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:45:57.396556   98140 start.go:563] Will wait 60s for crictl version
	I0916 10:45:57.396614   98140 ssh_runner.go:195] Run: which crictl
	I0916 10:45:57.399956   98140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:45:57.432710   98140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:45:57.432787   98140 ssh_runner.go:195] Run: crio --version
	I0916 10:45:57.466436   98140 ssh_runner.go:195] Run: crio --version
	I0916 10:45:57.500940   98140 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:45:57.502540   98140 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:45:57.504111   98140 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:45:57.521156   98140 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:45:57.524781   98140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:45:57.534797   98140 mustload.go:65] Loading cluster: ha-107957
	I0916 10:45:57.535019   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:57.535218   98140 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:45:57.552427   98140 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:45:57.552728   98140 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.3
	I0916 10:45:57.552751   98140 certs.go:194] generating shared ca certs ...
	I0916 10:45:57.552771   98140 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:57.552925   98140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:45:57.552966   98140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:45:57.552976   98140 certs.go:256] generating profile certs ...
	I0916 10:45:57.553037   98140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key
	I0916 10:45:57.553098   98140 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key.f59b195b
	I0916 10:45:57.553131   98140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key
	I0916 10:45:57.553145   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:45:57.553160   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:45:57.553179   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:45:57.553197   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:45:57.553209   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:45:57.553222   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:45:57.553234   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:45:57.553246   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:45:57.553295   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:45:57.553323   98140 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:45:57.553356   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:45:57.553394   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:45:57.553420   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:45:57.553442   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:45:57.553484   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:45:57.553512   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:57.553526   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:45:57.553539   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:45:57.553582   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:45:57.569848   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:45:57.657612   98140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:45:57.661053   98140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:45:57.673105   98140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:45:57.676390   98140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:45:57.688067   98140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:45:57.691210   98140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:45:57.702713   98140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:45:57.705856   98140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:45:57.717170   98140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:45:57.720186   98140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:45:57.731610   98140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:45:57.734896   98140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:45:57.746026   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:45:57.768633   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:45:57.790170   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:45:57.811626   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:45:57.832959   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 10:45:57.854413   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:45:57.876264   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:45:57.898106   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:45:57.923542   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:45:57.947222   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:45:57.971255   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:45:58.013086   98140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:45:58.029299   98140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:45:58.045110   98140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:45:58.060953   98140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:45:58.076919   98140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:45:58.093198   98140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:45:58.109967   98140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:45:58.127701   98140 ssh_runner.go:195] Run: openssl version
	I0916 10:45:58.132953   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:45:58.142557   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:58.146401   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:58.146476   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:58.153208   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:45:58.162375   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:45:58.171642   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:45:58.175285   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:45:58.175355   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:45:58.181790   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:45:58.190241   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:45:58.199278   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:45:58.202828   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:45:58.202888   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:45:58.209061   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:45:58.217153   98140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:45:58.220459   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:45:58.226663   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:45:58.232811   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:45:58.238786   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:45:58.244800   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:45:58.250639   98140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
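Each openssl -checkend 86400 run above verifies that a certificate is not within 24 hours of expiry (a non-zero exit would trigger regeneration). A minimal Go equivalent using crypto/x509; the helper name is hypothetical:

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // mirroring `openssl x509 -noout -checkend 86400` for d = 24*time.Hour.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }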
	I0916 10:45:58.256415   98140 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0916 10:45:58.256523   98140 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:45:58.256553   98140 kube-vip.go:115] generating kube-vip config ...
	I0916 10:45:58.256595   98140 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:45:58.267533   98140 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
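The `lsmod | grep ip_vs` probe above exits non-zero, so control-plane load-balancing is skipped and kube-vip only manages the VIP itself. A hypothetical Go sketch of the same check, reading /proc/modules (the data lsmod itself reports):

    package sketch

    import (
        "bufio"
        "os"
        "strings"
    )

    // hasIPVS scans /proc/modules for any loaded ip_vs module, the condition
    // the log's lsmod pipeline tests before enabling kube-vip load-balancing.
    func hasIPVS() (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), "ip_vs") {
                return true, nil
            }
        }
        return false, sc.Err()
    }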
	I0916 10:45:58.267600   98140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
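The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod with NET_ADMIN/NET_RAW to claim the 192.168.49.254 VIP via ARP leader election. A small sketch that sanity-checks such a manifest by decoding it into a typed Pod; this assumes the k8s.io/api and sigs.k8s.io/yaml modules and is not part of minikube's flow:

    package main

    import (
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        sigsyaml "sigs.k8s.io/yaml"
    )

    // Decode the static-pod manifest and print the kube-vip image, failing
    // loudly if the YAML does not parse as a Pod.
    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        if err := sigsyaml.Unmarshal(data, &pod); err != nil {
            panic(err)
        }
        fmt.Println(pod.Spec.Containers[0].Image) // ghcr.io/kube-vip/kube-vip:v0.8.0
    }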
	I0916 10:45:58.267663   98140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:45:58.275820   98140 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:45:58.275896   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:45:58.284139   98140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:45:58.301062   98140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:45:58.317532   98140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:45:58.334167   98140 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:45:58.337456   98140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:45:58.347414   98140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:58.441326   98140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:45:58.452220   98140 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:45:58.452474   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:58.454584   98140 out.go:177] * Verifying Kubernetes components...
	I0916 10:45:58.455889   98140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:58.547968   98140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:45:58.559611   98140 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:45:58.559916   98140 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:45:58.560001   98140 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:45:58.560245   98140 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m02" to be "Ready" ...
	I0916 10:45:58.560343   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:45:58.560354   98140 round_trippers.go:469] Request Headers:
	I0916 10:45:58.560366   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:58.560372   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:08.561694   98140 round_trippers.go:574] Response Status:  in 10001 milliseconds
	I0916 10:46:08.561774   98140 node_ready.go:53] error getting node "ha-107957-m02": Get "https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02": net/http: TLS handshake timeout
	I0916 10:46:08.561861   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:08.561872   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:08.561884   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:08.561892   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.596691   98140 round_trippers.go:574] Response Status: 200 OK in 1034 milliseconds
	I0916 10:46:09.597918   98140 node_ready.go:49] node "ha-107957-m02" has status "Ready":"True"
	I0916 10:46:09.597989   98140 node_ready.go:38] duration metric: took 11.037725168s for node "ha-107957-m02" to be "Ready" ...
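The node_ready wait above polls GET /api/v1/nodes/<name> until the node reports Ready (note the first attempt hit a TLS handshake timeout while the restarted control plane came back). A minimal client-go sketch of the same predicate; clientset construction is omitted and the function name is hypothetical:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeReady fetches a node and reports whether its Ready condition is True,
    // the condition the log's wait loop keeps re-checking.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }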
	I0916 10:46:09.598012   98140 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:46:09.598086   98140 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:46:09.598117   98140 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:46:09.598207   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:46:09.598229   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.598249   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.598263   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.612809   98140 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0916 10:46:09.624237   98140 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.624324   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:09.624333   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.624340   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.624346   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.626525   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:09.627195   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:09.627210   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.627218   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.627222   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.629180   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:09.629658   98140 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:09.629677   98140 pod_ready.go:82] duration metric: took 5.413516ms for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.629687   98140 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.629748   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:46:09.629757   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.629764   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.629769   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.631679   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:09.632348   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:09.632364   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.632374   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.632378   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.634293   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:09.634697   98140 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:09.634713   98140 pod_ready.go:82] duration metric: took 5.019458ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.634723   98140 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.634820   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:46:09.634830   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.634837   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.634841   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.636795   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:09.637252   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:09.637268   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.637276   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.637281   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.639167   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:09.639611   98140 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:09.639631   98140 pod_ready.go:82] duration metric: took 4.902601ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.639641   98140 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.639708   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:46:09.639717   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.639725   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.639732   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.641879   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:09.642460   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:09.642478   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.642488   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.642496   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.644444   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:09.644867   98140 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:09.644882   98140 pod_ready.go:82] duration metric: took 5.228432ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.644893   98140 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.644949   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:46:09.644957   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.644964   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.644972   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.646915   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:09.798541   98140 request.go:632] Waited for 151.004036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:09.798595   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:09.798600   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.798607   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.798611   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:09.801209   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:46:09.801451   98140 pod_ready.go:98] node "ha-107957-m03" hosting pod "etcd-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:09.801469   98140 pod_ready.go:82] duration metric: took 156.566994ms for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:09.801481   98140 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m03" hosting pod "etcd-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
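The repeated "Waited for ... due to client-side throttling" lines in this stretch come from client-go's token-bucket rate limiter: the rest.Config dumped earlier shows QPS:0 and Burst:0, so client-go falls back to its defaults (5 requests/s, burst 10), and the burst of per-pod and per-node GETs queues up. Relaxing that in a client is a two-field change on the config before building the clientset; values below are illustrative:

    package sketch

    import "k8s.io/client-go/rest"

    // relaxThrottle raises client-go's client-side rate limits; with QPS and
    // Burst left at zero (as in the log), the defaults of 5 QPS / burst 10 apply.
    func relaxThrottle(cfg *rest.Config) {
        cfg.QPS = 50
        cfg.Burst = 100
    }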
	I0916 10:46:09.801508   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:09.998970   98140 request.go:632] Waited for 197.375972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:46:09.999036   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:46:09.999044   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:09.999056   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:09.999064   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:10.002732   98140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:46:10.198786   98140 request.go:632] Waited for 195.422608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:10.198848   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:10.198854   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:10.198861   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:10.198866   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:10.201566   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:10.202133   98140 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:10.202152   98140 pod_ready.go:82] duration metric: took 400.63154ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:10.202162   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:10.398282   98140 request.go:632] Waited for 196.034665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:46:10.398347   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:46:10.398352   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:10.398359   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:10.398363   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:10.401211   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:10.599060   98140 request.go:632] Waited for 197.209939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:10.599119   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:10.599124   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:10.599146   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:10.599152   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:10.601159   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:10.601697   98140 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:10.601724   98140 pod_ready.go:82] duration metric: took 399.551329ms for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:10.601737   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:10.798763   98140 request.go:632] Waited for 196.951298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:46:10.798841   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:46:10.798846   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:10.798855   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:10.798861   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:10.801623   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:10.998685   98140 request.go:632] Waited for 196.387129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:10.998757   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:10.998764   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:10.998773   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:10.998779   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:11.001288   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:46:11.001446   98140 pod_ready.go:98] node "ha-107957-m03" hosting pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:11.001466   98140 pod_ready.go:82] duration metric: took 399.721665ms for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:11.001480   98140 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m03" hosting pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:11.001494   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:11.198807   98140 request.go:632] Waited for 197.227492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:46:11.198898   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:46:11.198909   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:11.198918   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:11.198925   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:11.201758   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:11.398850   98140 request.go:632] Waited for 196.368107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:11.398920   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:11.398931   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:11.398942   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:11.398950   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:11.401050   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:11.401582   98140 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:11.401606   98140 pod_ready.go:82] duration metric: took 400.09755ms for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:11.401619   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:11.598536   98140 request.go:632] Waited for 196.834478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:46:11.598596   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:46:11.598605   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:11.598613   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:11.598623   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:11.601417   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:11.798360   98140 request.go:632] Waited for 196.327017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:11.798415   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:11.798420   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:11.798427   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:11.798432   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:11.801324   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:11.801864   98140 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:11.801884   98140 pod_ready.go:82] duration metric: took 400.258667ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:11.801894   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:11.998970   98140 request.go:632] Waited for 196.981991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:46:11.999034   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:46:11.999040   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:11.999048   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:11.999056   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:12.002095   98140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:46:12.199193   98140 request.go:632] Waited for 196.410011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:12.199263   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:12.199268   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:12.199275   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:12.199278   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:12.201721   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:46:12.201830   98140 pod_ready.go:98] node "ha-107957-m03" hosting pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:12.201845   98140 pod_ready.go:82] duration metric: took 399.944157ms for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:12.201856   98140 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m03" hosting pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:12.201864   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:12.399167   98140 request.go:632] Waited for 197.238001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:46:12.399236   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:46:12.399242   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:12.399248   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:12.399252   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:12.401790   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:12.599265   98140 request.go:632] Waited for 196.744567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:12.599332   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:12.599340   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:12.599350   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:12.599359   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:12.602085   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:12.602622   98140 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:12.602644   98140 pod_ready.go:82] duration metric: took 400.769609ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:12.602657   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:12.798536   98140 request.go:632] Waited for 195.810549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:46:12.798615   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:46:12.798621   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:12.798631   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:12.798636   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:12.800980   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:12.998958   98140 request.go:632] Waited for 197.400412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:12.999010   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:12.999015   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:12.999022   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:12.999026   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:13.001516   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:46:13.001637   98140 pod_ready.go:98] node "ha-107957-m03" hosting pod "kube-proxy-f2scr" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:13.001652   98140 pod_ready.go:82] duration metric: took 398.987856ms for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:13.001661   98140 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m03" hosting pod "kube-proxy-f2scr" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:13.001671   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:13.199001   98140 request.go:632] Waited for 197.242468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:46:13.199053   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:46:13.199058   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:13.199065   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:13.199069   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:13.201816   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:13.398778   98140 request.go:632] Waited for 196.4056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:46:13.398869   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:46:13.398881   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:13.398894   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:13.398905   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:13.401767   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:13.402239   98140 pod_ready.go:93] pod "kube-proxy-hm8zn" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:13.402260   98140 pod_ready.go:82] duration metric: took 400.577311ms for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:13.402272   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:13.599293   98140 request.go:632] Waited for 196.950212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:46:13.599379   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:46:13.599387   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:13.599396   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:13.599406   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:13.602093   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:13.799127   98140 request.go:632] Waited for 196.345275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:13.799191   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:13.799197   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:13.799204   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:13.799208   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:13.802047   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:13.802534   98140 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:13.802553   98140 pod_ready.go:82] duration metric: took 400.274667ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:13.802563   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:13.998619   98140 request.go:632] Waited for 195.972392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:46:13.998684   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:46:13.998690   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:13.998697   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:13.998701   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:14.001422   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:14.199294   98140 request.go:632] Waited for 197.36307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:14.199372   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:14.199380   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:14.199539   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:14.199876   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:14.203830   98140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:46:14.204241   98140 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:14.204257   98140 pod_ready.go:82] duration metric: took 401.680607ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:14.204267   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:14.399277   98140 request.go:632] Waited for 194.937686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:46:14.399343   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:46:14.399348   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:14.399356   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:14.399360   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:14.401992   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:14.598905   98140 request.go:632] Waited for 196.354832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:14.598964   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:14.598970   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:14.598977   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:14.598981   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:14.601610   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:14.602190   98140 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:14.602215   98140 pod_ready.go:82] duration metric: took 397.941465ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:14.602229   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:14.799273   98140 request.go:632] Waited for 196.9648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:46:14.799339   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:46:14.799344   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:14.799351   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:14.799355   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:14.802278   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:14.999151   98140 request.go:632] Waited for 196.379528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:14.999238   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m03
	I0916 10:46:14.999246   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:14.999257   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:14.999263   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:15.002092   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:46:15.002215   98140 pod_ready.go:98] node "ha-107957-m03" hosting pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:15.002232   98140 pod_ready.go:82] duration metric: took 399.993383ms for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:15.002241   98140 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-107957-m03" hosting pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-107957-m03": nodes "ha-107957-m03" not found
	I0916 10:46:15.002249   98140 pod_ready.go:39] duration metric: took 5.404217728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
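
The pod_ready loop above issues two GETs per pod: the pod itself, then the node hosting it, so a deleted node (ha-107957-m03 here) surfaces as a 404 and the pod is skipped rather than failed. The "Waited ... due to client-side throttling" messages come from client-go's client-side rate limiter, not API priority and fairness, which is why consecutive requests are spaced roughly 200ms apart. A minimal sketch of the same Ready check with client-go (names and the target pod are illustrative, not minikube's actual helpers):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-5ctr8", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod")
    }
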
	I0916 10:46:15.002266   98140 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:46:15.002314   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:15.502856   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:16.003462   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:16.503224   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:17.003115   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:17.502514   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:18.003416   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:18.503346   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:19.002852   98140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:19.013483   98140 api_server.go:72] duration metric: took 20.561213418s to wait for apiserver process to appear ...
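
The half-second cadence above is a plain retry loop: re-run pgrep (-x exact match, -n newest, -f match against the full command line) until it exits 0 or the wait budget runs out. A generic sketch of the pattern, run locally for illustration where minikube really goes through its ssh_runner (assumes fmt, os/exec, and time are imported):

    // waitFor polls run() every 500ms until it succeeds or timeout elapses.
    func waitFor(run func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := run(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    // Inside some function: succeed once a kube-apiserver process exists.
    err := waitFor(func() error {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    }, time.Minute)
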
	I0916 10:46:19.013511   98140 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:46:19.013534   98140 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:46:19.017303   98140 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:46:19.017396   98140 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:46:19.017408   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:19.017416   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:19.017421   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:19.018229   98140 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:46:19.018353   98140 api_server.go:141] control plane version: v1.31.1
	I0916 10:46:19.018385   98140 api_server.go:131] duration metric: took 4.86073ms to wait for apiserver health ...
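
The healthz gate is a single HTTPS GET that must come back 200 with body "ok"; only then is /version read for the reported control-plane version (v1.31.1). A sketch of one probe trusting the cluster CA (the caFile path is an assumption; assumes crypto/tls, crypto/x509, io, net/http, os, strings, and time are imported):

    // healthzOK performs one GET against the apiserver's /healthz endpoint.
    func healthzOK(host, caFile string) (bool, error) {
        pem, err := os.ReadFile(caFile)
        if err != nil {
            return false, err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(pem)
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        resp, err := client.Get("https://" + host + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
    }
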
	I0916 10:46:19.018396   98140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:46:19.018476   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:46:19.018485   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:19.018493   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:19.018500   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:19.023390   98140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:46:19.031785   98140 system_pods.go:59] 26 kube-system pods found
	I0916 10:46:19.031820   98140 system_pods.go:61] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:46:19.031831   98140 system_pods.go:61] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:46:19.031839   98140 system_pods.go:61] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:46:19.031845   98140 system_pods.go:61] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:46:19.031851   98140 system_pods.go:61] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:46:19.031857   98140 system_pods.go:61] "kindnet-4lkzl" [d08902f4-b63c-46cc-b388-c4fcbe8fc960] Running
	I0916 10:46:19.031866   98140 system_pods.go:61] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:46:19.031872   98140 system_pods.go:61] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:46:19.031878   98140 system_pods.go:61] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:46:19.031884   98140 system_pods.go:61] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:46:19.031892   98140 system_pods.go:61] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:46:19.031898   98140 system_pods.go:61] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:46:19.031904   98140 system_pods.go:61] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:46:19.031912   98140 system_pods.go:61] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:46:19.031918   98140 system_pods.go:61] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:46:19.031926   98140 system_pods.go:61] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:46:19.031931   98140 system_pods.go:61] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:46:19.031937   98140 system_pods.go:61] "kube-proxy-hm8zn" [6ea6916e-f34c-42b3-996b-033915687fd1] Running
	I0916 10:46:19.031944   98140 system_pods.go:61] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:46:19.031951   98140 system_pods.go:61] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:46:19.031958   98140 system_pods.go:61] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:46:19.031963   98140 system_pods.go:61] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:46:19.031970   98140 system_pods.go:61] "kube-vip-ha-107957" [d508299d-30c6-4f09-8f93-04280ddc9c11] Running
	I0916 10:46:19.031976   98140 system_pods.go:61] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:46:19.031984   98140 system_pods.go:61] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:46:19.031989   98140 system_pods.go:61] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:46:19.031995   98140 system_pods.go:74] duration metric: took 13.590211ms to wait for pod list to return data ...
	I0916 10:46:19.032005   98140 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:46:19.032087   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:46:19.032097   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:19.032107   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:19.032112   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:19.034957   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:19.035161   98140 default_sa.go:45] found service account: "default"
	I0916 10:46:19.035176   98140 default_sa.go:55] duration metric: took 3.162111ms for default service account to be created ...
	I0916 10:46:19.035183   98140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:46:19.035249   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:46:19.035260   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:19.035268   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:19.035272   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:19.039449   98140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:46:19.046429   98140 system_pods.go:86] 26 kube-system pods found
	I0916 10:46:19.046459   98140 system_pods.go:89] "coredns-7c65d6cfc9-mhp28" [4f79459d-4e48-4320-a873-30ad21c7ea25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:46:19.046468   98140 system_pods.go:89] "coredns-7c65d6cfc9-t9xdr" [e2bc879b-a96e-43bb-a253-47a8fa737826] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:46:19.046476   98140 system_pods.go:89] "etcd-ha-107957" [928c96a3-f800-4899-9c01-c9a52233dea3] Running
	I0916 10:46:19.046481   98140 system_pods.go:89] "etcd-ha-107957-m02" [d55e235e-d148-4432-9f21-55881fc9297f] Running
	I0916 10:46:19.046487   98140 system_pods.go:89] "etcd-ha-107957-m03" [f49bb9d2-e8d8-4cd5-9fb5-209b18bab0d6] Running
	I0916 10:46:19.046491   98140 system_pods.go:89] "kindnet-4lkzl" [d08902f4-b63c-46cc-b388-c4fcbe8fc960] Running
	I0916 10:46:19.046495   98140 system_pods.go:89] "kindnet-rcsxv" [d1779a0d-03eb-43b3-8d72-8337eaa1499b] Running
	I0916 10:46:19.046500   98140 system_pods.go:89] "kindnet-rwcs2" [df0e02e3-2a14-48fb-8f07-47dd836c8ea4] Running
	I0916 10:46:19.046504   98140 system_pods.go:89] "kindnet-sjkjx" [c4f606aa-4614-4e16-8bce-076ae293e21a] Running
	I0916 10:46:19.046508   98140 system_pods.go:89] "kube-apiserver-ha-107957" [3825580c-d1f8-4c6e-9475-6640cb559753] Running
	I0916 10:46:19.046513   98140 system_pods.go:89] "kube-apiserver-ha-107957-m02" [5a1908b5-ba28-4fba-8214-b22d178e165f] Running
	I0916 10:46:19.046517   98140 system_pods.go:89] "kube-apiserver-ha-107957-m03" [bdc207e5-f06b-47a6-86cd-df280829147f] Running
	I0916 10:46:19.046522   98140 system_pods.go:89] "kube-controller-manager-ha-107957" [b42baa8d-5f80-478c-8b69-1e055b32ba16] Running
	I0916 10:46:19.046531   98140 system_pods.go:89] "kube-controller-manager-ha-107957-m02" [a7514b4b-19a7-457c-8289-dafc7a7acfc1] Running
	I0916 10:46:19.046535   98140 system_pods.go:89] "kube-controller-manager-ha-107957-m03" [e836efd1-067a-4d7c-be3d-6ef190cf7ed4] Running
	I0916 10:46:19.046541   98140 system_pods.go:89] "kube-proxy-5ctr8" [ae19e764-5020-48d7-9e34-adc329e8c502] Running
	I0916 10:46:19.046546   98140 system_pods.go:89] "kube-proxy-f2scr" [b1fd292f-fcfd-4497-a3bf-37e0ed570a39] Running
	I0916 10:46:19.046558   98140 system_pods.go:89] "kube-proxy-hm8zn" [6ea6916e-f34c-42b3-996b-033915687fd1] Running
	I0916 10:46:19.046563   98140 system_pods.go:89] "kube-proxy-qtxh9" [48f3069d-9155-420d-80a9-8cd30c6cf8bb] Running
	I0916 10:46:19.046568   98140 system_pods.go:89] "kube-scheduler-ha-107957" [54cd4b38-f7ac-495c-a72a-d01708ffc607] Running
	I0916 10:46:19.046575   98140 system_pods.go:89] "kube-scheduler-ha-107957-m02" [a549a5e4-72b6-4ba6-9528-8cec3bc03f09] Running
	I0916 10:46:19.046579   98140 system_pods.go:89] "kube-scheduler-ha-107957-m03" [4c2f1d08-11bf-4d79-b5e0-3c63f35bddc1] Running
	I0916 10:46:19.046583   98140 system_pods.go:89] "kube-vip-ha-107957" [d508299d-30c6-4f09-8f93-04280ddc9c11] Running
	I0916 10:46:19.046587   98140 system_pods.go:89] "kube-vip-ha-107957-m02" [82ffbd87-5c82-4534-a81f-276db9121f2a] Running
	I0916 10:46:19.046590   98140 system_pods.go:89] "kube-vip-ha-107957-m03" [0c974aec-d6d3-4833-ae07-50fa862903eb] Running
	I0916 10:46:19.046593   98140 system_pods.go:89] "storage-provisioner" [7b4f4924-ccac-42ba-983c-5ac7e0696277] Running
	I0916 10:46:19.046599   98140 system_pods.go:126] duration metric: took 11.411473ms to wait for k8s-apps to be running ...
	I0916 10:46:19.046610   98140 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:46:19.046657   98140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:46:19.058166   98140 system_svc.go:56] duration metric: took 11.544851ms WaitForService to wait for kubelet
	I0916 10:46:19.058197   98140 kubeadm.go:582] duration metric: took 20.605931395s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:46:19.058218   98140 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:46:19.058318   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:46:19.058328   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:19.058335   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:19.058341   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:19.061123   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:19.062386   98140 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:46:19.062413   98140 node_conditions.go:123] node cpu capacity is 8
	I0916 10:46:19.062424   98140 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:46:19.062429   98140 node_conditions.go:123] node cpu capacity is 8
	I0916 10:46:19.062434   98140 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:46:19.062439   98140 node_conditions.go:123] node cpu capacity is 8
	I0916 10:46:19.062443   98140 node_conditions.go:105] duration metric: took 4.220956ms to run NodePressure ...
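
Verifying NodePressure boils down to reading each node's reported capacity (here 8 CPUs and 304681132Ki of ephemeral storage on all three remaining nodes) and confirming none of the pressure conditions are raised. A sketch over the same API objects (corev1 is k8s.io/api/core/v1, fmt assumed imported; the helper name is illustrative):

    // nodeHealthy prints the capacity fields seen in the log and reports
    // whether MemoryPressure, DiskPressure and PIDPressure are all False.
    func nodeHealthy(n *corev1.Node) bool {
        fmt.Printf("cpu=%s ephemeral-storage=%s\n",
            n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        for _, c := range n.Status.Conditions {
            switch c.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                if c.Status != corev1.ConditionFalse {
                    return false
                }
            }
        }
        return true
    }
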
	I0916 10:46:19.062459   98140 start.go:241] waiting for startup goroutines ...
	I0916 10:46:19.062491   98140 start.go:255] writing updated cluster config ...
	I0916 10:46:19.064755   98140 out.go:201] 
	I0916 10:46:19.066463   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:46:19.066562   98140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:46:19.068415   98140 out.go:177] * Starting "ha-107957-m04" worker node in "ha-107957" cluster
	I0916 10:46:19.070168   98140 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:46:19.071370   98140 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:46:19.072682   98140 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:46:19.072706   98140 cache.go:56] Caching tarball of preloaded images
	I0916 10:46:19.072709   98140 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:46:19.072814   98140 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:46:19.072828   98140 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:46:19.072917   98140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	W0916 10:46:19.092326   98140 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:46:19.092343   98140 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:46:19.092416   98140 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:46:19.092432   98140 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:46:19.092439   98140 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:46:19.092446   98140 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:46:19.092453   98140 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:46:19.093623   98140 image.go:273] response: 
	I0916 10:46:19.149569   98140 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:46:19.149603   98140 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:46:19.149641   98140 start.go:360] acquireMachinesLock for ha-107957-m04: {Name:mk140f36fe9b3ae2aca73cd487e78881b966d113 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:46:19.149720   98140 start.go:364] duration metric: took 58.791µs to acquireMachinesLock for "ha-107957-m04"
	I0916 10:46:19.149744   98140 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:46:19.149755   98140 fix.go:54] fixHost starting: m04
	I0916 10:46:19.150005   98140 cli_runner.go:164] Run: docker container inspect ha-107957-m04 --format={{.State.Status}}
	I0916 10:46:19.167478   98140 fix.go:112] recreateIfNeeded on ha-107957-m04: state=Stopped err=<nil>
	W0916 10:46:19.167518   98140 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:46:19.169708   98140 out.go:177] * Restarting existing docker container for "ha-107957-m04" ...
	I0916 10:46:19.171074   98140 cli_runner.go:164] Run: docker start ha-107957-m04
	I0916 10:46:19.468175   98140 cli_runner.go:164] Run: docker container inspect ha-107957-m04 --format={{.State.Status}}
	I0916 10:46:19.488518   98140 kic.go:430] container "ha-107957-m04" state is running.
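
fixHost reuses the stopped machine rather than recreating it: docker start, then poll the engine until the container's .State.Status template renders "running". A sketch of that status read (assumes os/exec and strings are imported):

    // containerState returns Docker's state string for a container
    // ("running", "exited", ...), mirroring the inspect calls above.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", "{{.State.Status}}", name).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }
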
	I0916 10:46:19.488849   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m04
	I0916 10:46:19.513313   98140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/config.json ...
	I0916 10:46:19.513647   98140 machine.go:93] provisionDockerMachine start ...
	I0916 10:46:19.513727   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:19.534928   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:19.535149   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0916 10:46:19.535166   98140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:46:19.535909   98140 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40724->127.0.0.1:32838: read: connection reset by peer
	I0916 10:46:22.669214   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m04
	
	I0916 10:46:22.669251   98140 ubuntu.go:169] provisioning hostname "ha-107957-m04"
	I0916 10:46:22.669301   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:22.687798   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:22.687981   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0916 10:46:22.687994   98140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-107957-m04 && echo "ha-107957-m04" | sudo tee /etc/hostname
	I0916 10:46:22.836777   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-107957-m04
	
	I0916 10:46:22.836868   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:22.853902   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:22.854167   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0916 10:46:22.854196   98140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-107957-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-107957-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-107957-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:46:22.993509   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:46:22.993540   98140 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:46:22.993561   98140 ubuntu.go:177] setting up certificates
	I0916 10:46:22.993575   98140 provision.go:84] configureAuth start
	I0916 10:46:22.993631   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m04
	I0916 10:46:23.011552   98140 provision.go:143] copyHostCerts
	I0916 10:46:23.011594   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:46:23.011633   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:46:23.011646   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:46:23.011724   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:46:23.011821   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:46:23.011852   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:46:23.011862   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:46:23.011906   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:46:23.011968   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:46:23.011993   98140 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:46:23.012000   98140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:46:23.012037   98140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:46:23.012108   98140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.ha-107957-m04 san=[127.0.0.1 192.168.49.5 ha-107957-m04 localhost minikube]
	I0916 10:46:23.184584   98140 provision.go:177] copyRemoteCerts
	I0916 10:46:23.184647   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:46:23.184682   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:23.202642   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:46:23.298613   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:46:23.298682   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:46:23.323267   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:46:23.323333   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:46:23.346509   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:46:23.346574   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:46:23.371624   98140 provision.go:87] duration metric: took 378.032746ms to configureAuth
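
configureAuth refreshes the host-side copies of ca.pem/cert.pem/key.pem and then mints a server certificate signed by the minikube CA with the SAN set printed at 10:46:23.012108 (127.0.0.1, 192.168.49.5, ha-107957-m04, localhost, minikube). A compressed crypto/x509 sketch of that issuance, not minikube's actual code; caCert, caKey and serverKey are assumed to be loaded already (imports: crypto/rand, crypto/x509, crypto/x509/pkix, math/big, net, time):

    // Issue a server cert with the SANs from the log (values illustrative).
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-107957-m04"}},
        DNSNames:     []string{"ha-107957-m04", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(3, 0, 0),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    // der is then PEM-encoded to server.pem and delivered to
    // /etc/docker/server.pem by the copyRemoteCerts scp calls above.
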
	I0916 10:46:23.371659   98140 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:46:23.371903   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:46:23.371996   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:23.389559   98140 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:23.389753   98140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0916 10:46:23.389771   98140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:46:23.646156   98140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:46:23.646178   98140 machine.go:96] duration metric: took 4.132511211s to provisionDockerMachine
	I0916 10:46:23.646191   98140 start.go:293] postStartSetup for "ha-107957-m04" (driver="docker")
	I0916 10:46:23.646203   98140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:46:23.646264   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:46:23.646316   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:23.665319   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:46:23.762900   98140 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:46:23.766305   98140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:46:23.766343   98140 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:46:23.766351   98140 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:46:23.766358   98140 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:46:23.766367   98140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:46:23.766428   98140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:46:23.766517   98140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:46:23.766562   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:46:23.766721   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:46:23.775401   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:46:23.797729   98140 start.go:296] duration metric: took 151.523033ms for postStartSetup
	I0916 10:46:23.797807   98140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:46:23.797855   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:23.814877   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:46:23.906432   98140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:46:23.911137   98140 fix.go:56] duration metric: took 4.761374378s for fixHost
	I0916 10:46:23.911164   98140 start.go:83] releasing machines lock for "ha-107957-m04", held for 4.761430679s
	I0916 10:46:23.911232   98140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m04
	I0916 10:46:23.932085   98140 out.go:177] * Found network options:
	I0916 10:46:23.933712   98140 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 10:46:23.935043   98140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:46:23.935071   98140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:46:23.935092   98140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:46:23.935100   98140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:46:23.935178   98140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:46:23.935228   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:23.935274   98140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:46:23.935327   98140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:46:23.953497   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:46:23.953604   98140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:46:24.179938   98140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:46:24.184330   98140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:46:24.192495   98140 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:46:24.192570   98140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:46:24.200839   98140 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:46:24.200863   98140 start.go:495] detecting cgroup driver to use...
	I0916 10:46:24.200895   98140 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:46:24.200940   98140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:46:24.212349   98140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:46:24.222986   98140 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:46:24.223036   98140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:46:24.234403   98140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:46:24.245678   98140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:46:24.322761   98140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:46:24.405785   98140 docker.go:233] disabling docker service ...
	I0916 10:46:24.405850   98140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:46:24.417246   98140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:46:24.428502   98140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:46:24.507428   98140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:46:24.583607   98140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:46:24.594379   98140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:46:24.609496   98140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:46:24.609545   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:46:24.618745   98140 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:46:24.618818   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:46:24.628377   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:46:24.638236   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:46:24.647858   98140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:46:24.656568   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:46:24.665921   98140 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:46:24.674664   98140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:46:24.684077   98140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:46:24.692202   98140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:46:24.700265   98140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:24.776639   98140 ssh_runner.go:195] Run: sudo systemctl restart crio
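
All of those sed edits target one drop-in, /etc/crio/crio.conf.d/02-crio.conf. Assuming the stock kicbase layout, the edited file would plausibly end up with:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

"cgroupfs" matches the cgroup driver detected on the host at 10:46:24.200895, and the unprivileged-port sysctl lets pods bind ports below 1024 without extra capabilities; the daemon-reload/restart pair then makes CRI-O pick the file up.
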
	I0916 10:46:24.892301   98140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:46:24.892372   98140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:46:24.895929   98140 start.go:563] Will wait 60s for crictl version
	I0916 10:46:24.895982   98140 ssh_runner.go:195] Run: which crictl
	I0916 10:46:24.899066   98140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:46:24.933370   98140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:46:24.933462   98140 ssh_runner.go:195] Run: crio --version
	I0916 10:46:24.968492   98140 ssh_runner.go:195] Run: crio --version
	I0916 10:46:25.005453   98140 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:46:25.006934   98140 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:46:25.008513   98140 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:46:25.009940   98140 cli_runner.go:164] Run: docker network inspect ha-107957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:46:25.028880   98140 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:46:25.032413   98140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
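
The temp-file-then-cp shape of that command is deliberate: a plain shell redirection would run with the unprivileged user's permissions, and inside a Docker container /etc/hosts is bind-mounted, so it cannot be swapped by a rename; sudo cp overwrites it in place instead.
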
	I0916 10:46:25.043101   98140 mustload.go:65] Loading cluster: ha-107957
	I0916 10:46:25.043308   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:46:25.043493   98140 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:46:25.060731   98140 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:46:25.060961   98140 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957 for IP: 192.168.49.5
	I0916 10:46:25.060972   98140 certs.go:194] generating shared ca certs ...
	I0916 10:46:25.060987   98140 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:25.061116   98140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:46:25.061167   98140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:46:25.061183   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:46:25.061207   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:46:25.061222   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:46:25.061240   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:46:25.061307   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:46:25.061396   98140 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:46:25.061410   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:46:25.061447   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:46:25.061480   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:46:25.061512   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:46:25.061570   98140 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:46:25.061603   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:46:25.061623   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:25.061642   98140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:46:25.061675   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:46:25.084612   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:46:25.107054   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:46:25.129718   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:46:25.151599   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:46:25.174677   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:46:25.196826   98140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:46:25.219344   98140 ssh_runner.go:195] Run: openssl version
	I0916 10:46:25.224738   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:46:25.233490   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:25.236752   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:25.236798   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:25.243119   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:46:25.250916   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:46:25.259540   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:46:25.262684   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:46:25.262734   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:46:25.269047   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:46:25.276891   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:46:25.285146   98140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:46:25.288184   98140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:46:25.288229   98140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:46:25.294389   98140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
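
The symlink names in the three sequences above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's trust-store convention: certificates in /etc/ssl/certs are located via links named <subject-hash>.0, and `openssl x509 -hash -noout -in <cert>` prints exactly that hash, so minikubeCA.pem and the two test certificates become trusted system-wide.
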
	I0916 10:46:25.302083   98140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:46:25.304899   98140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:46:25.304944   98140 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0916 10:46:25.305031   98140 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-107957-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-107957 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
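
The bare ExecStart= line in the rendered drop-in is deliberate: for a systemd service, an empty ExecStart= first clears the start command inherited from the base kubelet.service, so the ExecStart that follows, with the node-specific --hostname-override and --node-ip flags, becomes the only one in effect.
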
	I0916 10:46:25.305083   98140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:46:25.313081   98140 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:46:25.313140   98140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:46:25.320948   98140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 10:46:25.336594   98140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:46:25.352405   98140 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:46:25.355378   98140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
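	(The one-liner above is an idempotent host-entry update: strip any existing control-plane.minikube.internal line, append the current VIP, and install the result via sudo cp — the copy, not the redirection, is what needs root. Annotated, the same pattern reads:

	    {
	      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # keep every line except the stale entry
	      printf '192.168.49.254\tcontrol-plane.minikube.internal\n'  # append the current VIP mapping
	    } > /tmp/h.$$                                                 # stage under this shell's PID
	    sudo cp /tmp/h.$$ /etc/hosts                                  # only this step runs privileged
	)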
	I0916 10:46:25.365383   98140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:25.439224   98140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:46:25.451041   98140 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0916 10:46:25.451268   98140 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:46:25.453109   98140 out.go:177] * Verifying Kubernetes components...
	I0916 10:46:25.454631   98140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:25.531413   98140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:46:25.543138   98140 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:46:25.543354   98140 kapi.go:59] client config for ha-107957: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/ha-107957/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:46:25.543409   98140 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
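	(The warning above is benign: the kubeconfig points at the HA virtual IP (192.168.49.254:8443), but for verification minikube pins the client to a concrete control-plane endpoint (192.168.49.2:8443). A manual equivalent of that reachability check — assuming curl is available and reusing the cert paths from the config dump above:

	    P=/home/jenkins/minikube-integration/19651-3799/.minikube
	    curl -s --cacert "$P/ca.crt" \
	         --cert "$P/profiles/ha-107957/client.crt" \
	         --key  "$P/profiles/ha-107957/client.key" \
	         https://192.168.49.2:8443/version        # pinned node endpoint; use .254 for the VIP
	)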
	I0916 10:46:25.543609   98140 node_ready.go:35] waiting up to 6m0s for node "ha-107957-m04" to be "Ready" ...
	I0916 10:46:25.543686   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:46:25.543694   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:25.543701   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:25.543705   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:25.546469   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:25.546961   98140 node_ready.go:49] node "ha-107957-m04" has status "Ready":"True"
	I0916 10:46:25.546981   98140 node_ready.go:38] duration metric: took 3.35805ms for node "ha-107957-m04" to be "Ready" ...
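	(node_ready polls GET /api/v1/nodes/<name> until the node's Ready condition reports True; here the worker was already Ready on the first probe, hence the 3.35805ms duration. The same check from a shell — a sketch using kubectl and the kubeconfig loaded above, not the test's actual code path:

	    export KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	    until kubectl get node ha-107957-m04 \
	            -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -qx True; do
	      sleep 0.5    # roughly the 500ms retry cadence visible in the timestamps below
	    done
	)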
	I0916 10:46:25.546989   98140 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:46:25.547047   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:46:25.547056   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:25.547063   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:25.547067   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:25.551725   98140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:46:25.558788   98140 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:25.558892   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:25.558903   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:25.558911   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:25.558914   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:25.561444   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:25.562147   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:25.562168   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:25.562186   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:25.562197   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:25.564345   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:26.059076   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:26.059102   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:26.059113   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:26.059119   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:26.061908   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:26.062485   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:26.062501   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:26.062508   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:26.062513   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:26.064770   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:26.559740   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:26.559762   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:26.559770   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:26.559773   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:26.562669   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:26.563369   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:26.563387   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:26.563397   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:26.563406   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:26.565721   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:27.059571   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:27.059591   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:27.059599   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:27.059603   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:27.062314   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:27.062917   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:27.062931   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:27.062939   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:27.062944   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:27.065042   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:27.558997   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:27.559023   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:27.559032   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:27.559036   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:27.561772   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:27.562395   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:27.562410   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:27.562417   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:27.562423   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:27.564766   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:27.565191   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
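	(From here the log settles into its steady state: pod_ready re-fetches coredns-7c65d6cfc9-mhp28 roughly every 500ms, logging Ready:False until the pod's Ready condition flips; the paired GET of node ha-107957 appears to be the check that the pod's node is itself still Ready. The whole wait collapses to one command — a sketch, since kubectl wait is not what the test uses, though it performs the same condition check:

	    kubectl --kubeconfig "$KUBECONFIG" -n kube-system \
	      wait pod/coredns-7c65d6cfc9-mhp28 --for=condition=Ready --timeout=6m
	)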
	I0916 10:46:28.059525   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:28.059546   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:28.059556   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:28.059562   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:28.062231   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:28.062839   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:28.062856   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:28.062862   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:28.062867   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:28.064979   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:28.559516   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:28.559543   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:28.559554   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:28.559561   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:28.562476   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:28.563053   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:28.563071   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:28.563078   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:28.563153   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:28.565445   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:29.059234   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:29.059265   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:29.059276   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:29.059280   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:29.062142   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:29.062825   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:29.062842   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:29.062849   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:29.062853   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:29.065228   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:29.559007   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:29.559036   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:29.559045   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:29.559049   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:29.561706   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:29.562291   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:29.562307   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:29.562314   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:29.562317   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:29.564331   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:30.059175   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:30.059197   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:30.059208   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:30.059215   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:30.061716   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:30.062328   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:30.062345   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:30.062352   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:30.062357   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:30.064321   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:30.065468   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:30.559073   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:30.559100   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:30.559110   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:30.559116   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:30.563426   98140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:46:30.564075   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:30.564095   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:30.564103   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:30.564107   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:30.566515   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:31.059278   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:31.059297   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:31.059305   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:31.059311   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:31.062066   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:31.062825   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:31.062846   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:31.062856   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:31.062862   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:31.065152   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:31.560001   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:31.560027   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:31.560038   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:31.560048   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:31.562608   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:31.563236   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:31.563255   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:31.563264   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:31.563272   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:31.565495   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:32.059273   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:32.059293   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:32.059302   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:32.059306   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:32.061956   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:32.062558   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:32.062573   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:32.062581   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:32.062585   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:32.064845   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:32.559042   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:32.559062   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:32.559069   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:32.559075   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:32.561952   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:32.562712   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:32.562734   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:32.562745   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:32.562751   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:32.565238   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:32.565810   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:33.059081   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:33.059106   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:33.059117   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:33.059123   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:33.061763   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:33.062354   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:33.062374   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:33.062382   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:33.062387   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:33.064435   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:33.559088   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:33.559111   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:33.559119   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:33.559124   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:33.561974   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:33.562650   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:33.562671   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:33.562682   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:33.562690   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:33.564901   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:34.059672   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:34.059691   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:34.059698   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:34.059703   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:34.062396   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:34.063028   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:34.063044   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:34.063051   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:34.063055   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:34.065205   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:34.558991   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:34.559013   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:34.559023   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:34.559053   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:34.561654   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:34.562259   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:34.562275   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:34.562285   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:34.562290   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:34.564499   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:35.059293   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:35.059318   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:35.059327   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:35.059332   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:35.061916   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:35.062594   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:35.062611   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:35.062618   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:35.062627   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:35.064685   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:35.065254   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:35.559642   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:35.559668   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:35.559676   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:35.559681   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:35.562376   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:35.562958   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:35.562974   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:35.562982   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:35.562985   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:35.565235   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:36.059013   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:36.059033   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:36.059043   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:36.059047   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:36.061658   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:36.062253   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:36.062267   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:36.062274   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:36.062279   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:36.064282   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:36.559061   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:36.559084   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:36.559093   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:36.559096   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:36.561876   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:36.562467   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:36.562484   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:36.562492   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:36.562496   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:36.564871   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:37.059673   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:37.059694   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:37.059701   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:37.059705   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:37.062526   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:37.063120   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:37.063137   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:37.063144   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:37.063151   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:37.065194   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:37.065687   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:37.559068   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:37.559090   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:37.559097   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:37.559102   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:37.562162   98140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:46:37.562861   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:37.562878   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:37.562888   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:37.562898   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:37.565089   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:38.059872   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:38.059893   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:38.059903   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:38.059914   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:38.062800   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:38.063516   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:38.063536   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:38.063544   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:38.063549   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:38.065808   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:38.559686   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:38.559706   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:38.559715   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:38.559721   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:38.562417   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:38.563018   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:38.563035   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:38.563042   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:38.563045   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:38.565418   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:39.059172   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:39.059192   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:39.059204   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:39.059209   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:39.061896   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:39.062586   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:39.062601   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:39.062608   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:39.062614   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:39.064644   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:39.559571   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:39.559590   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:39.559597   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:39.559601   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:39.562782   98140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:46:39.563460   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:39.563477   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:39.563484   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:39.563489   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:39.565763   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:39.566228   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:40.059842   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:40.059861   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:40.059869   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:40.059874   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:40.062601   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:40.063243   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:40.063261   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:40.063269   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:40.063272   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:40.065516   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:40.559321   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:40.559340   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:40.559348   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:40.559352   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:40.562369   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:40.562946   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:40.562961   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:40.562969   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:40.562973   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:40.565454   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:41.059115   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:41.059136   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:41.059143   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:41.059148   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:41.062093   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:41.062751   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:41.062768   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:41.062776   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:41.062780   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:41.064774   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:41.559612   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:41.559641   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:41.559649   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:41.559654   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:41.562357   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:41.562952   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:41.562970   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:41.562977   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:41.562982   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:41.565505   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:42.059198   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:42.059224   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:42.059234   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:42.059239   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:42.062047   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:42.062683   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:42.062700   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:42.062707   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:42.062713   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:42.065124   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:42.065672   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:42.559346   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:42.559366   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:42.559375   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:42.559379   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:42.562139   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:42.562783   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:42.562799   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:42.562806   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:42.562810   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:42.564943   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:43.059852   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:43.059875   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:43.059883   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:43.059886   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:43.062798   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:43.063434   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:43.063452   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:43.063459   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:43.063464   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:43.065878   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:43.559783   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:43.559803   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:43.559810   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:43.559815   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:43.562738   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:43.563445   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:43.563467   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:43.563477   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:43.563481   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:43.565716   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:44.059709   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:44.059729   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:44.059736   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:44.059741   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:44.062374   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:44.062998   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:44.063015   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:44.063023   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:44.063029   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:44.065307   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:44.065793   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:44.559244   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:44.559262   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:44.559270   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:44.559273   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:44.561961   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:44.562570   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:44.562588   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:44.562597   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:44.562600   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:44.564782   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:45.059604   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:45.059624   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:45.059632   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:45.059635   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:45.062246   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:45.062833   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:45.062848   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:45.062855   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:45.062859   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:45.064921   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:45.559784   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:45.559802   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:45.559809   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:45.559816   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:45.562443   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:45.563036   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:45.563053   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:45.563060   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:45.563065   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:45.565225   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:46.059115   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:46.059140   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:46.059149   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:46.059152   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:46.061949   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:46.062644   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:46.062660   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:46.062666   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:46.062672   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:46.064718   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:46.559484   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:46.559503   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:46.559511   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:46.559514   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:46.562328   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:46.562950   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:46.562967   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:46.562974   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:46.562980   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:46.565210   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:46.565663   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:47.059004   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:47.059028   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:47.059044   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:47.059051   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:47.061889   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:47.062491   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:47.062507   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:47.062514   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:47.062520   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:47.064649   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:47.559621   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:47.559642   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:47.559652   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:47.559656   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:47.562295   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:47.563025   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:47.563042   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:47.563051   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:47.563056   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:47.565558   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:48.059336   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:48.059355   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:48.059362   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:48.059367   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:48.061990   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:48.062586   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:48.062601   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:48.062608   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:48.062613   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:48.064791   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:48.559724   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:48.559745   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:48.559753   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:48.559756   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:48.562344   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:48.563040   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:48.563056   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:48.563064   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:48.563067   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:48.565232   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:48.565703   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:49.059869   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:49.059895   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:49.059902   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:49.059906   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:49.062663   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:49.063276   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:49.063291   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:49.063298   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:49.063302   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:49.065483   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:49.559451   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:49.559470   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:49.559477   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:49.559482   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:49.562328   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:49.563098   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:49.563119   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:49.563129   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:49.563136   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:49.565282   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:50.059530   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:50.059570   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:50.059578   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:50.059583   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:50.062522   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:50.063160   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:50.063176   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:50.063182   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:50.063186   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:50.065673   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:50.559581   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:50.559603   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:50.559614   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:50.559618   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:50.562388   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:50.563098   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:50.563117   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:50.563128   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:50.563136   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:50.565447   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:50.565863   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:51.059102   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:51.059121   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:51.059130   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:51.059133   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:51.062083   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:51.062672   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:51.062688   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:51.062696   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:51.062700   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:51.064869   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:51.559700   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:51.559718   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:51.559726   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:51.559731   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:51.562462   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:51.563047   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:51.563062   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:51.563069   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:51.563073   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:51.565190   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:52.058991   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:52.059011   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:52.059018   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:52.059022   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:52.061514   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:52.062231   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:52.062249   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:52.062256   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:52.062260   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:52.064270   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:52.559387   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:52.559410   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:52.559417   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:52.559421   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:52.562181   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:52.562875   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:52.562893   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:52.562915   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:52.562924   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:52.565000   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:53.059790   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:53.059810   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:53.059818   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:53.059821   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:53.062426   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:53.063032   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:53.063053   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:53.063060   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:53.063065   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:53.065120   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:53.065531   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:53.559962   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:53.559982   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:53.559990   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:53.559994   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:53.562914   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:53.563512   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:53.563529   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:53.563536   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:53.563540   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:53.565973   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:54.059847   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:54.059867   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:54.059874   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:54.059878   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:54.062699   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:54.063320   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:54.063335   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:54.063341   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:54.063345   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:54.065364   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:54.559184   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:54.559204   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:54.559213   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:54.559218   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:54.561693   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:54.562273   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:54.562288   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:54.562297   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:54.562301   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:54.564679   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:55.059077   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:55.059117   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:55.059125   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:55.059129   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:55.061994   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:55.062633   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:55.062650   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:55.062657   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:55.062663   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:55.064937   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:55.559772   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:55.559796   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:55.559808   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:55.559825   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:55.562476   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:55.563080   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:55.563094   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:55.563103   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:55.563111   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:55.565423   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:55.565886   98140 pod_ready.go:103] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"False"
	I0916 10:46:56.059898   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:56.059917   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.059925   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.059928   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.062365   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:56.062956   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:56.062970   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.062979   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.062982   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.064935   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:56.559703   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mhp28
	I0916 10:46:56.559723   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.559732   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.559735   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.562517   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:56.563131   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:56.563148   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.563156   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.563160   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.565385   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:56.565866   98140 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:56.565884   98140 pod_ready.go:82] duration metric: took 31.007068939s for pod "coredns-7c65d6cfc9-mhp28" in "kube-system" namespace to be "Ready" ...
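
The 31s loop above is minikube's pod_ready wait: roughly every 500ms it GETs the pod (and, as the paired requests show, its node) until the PodReady condition turns True. A minimal client-go sketch of the same check; the kubeconfig path and the isPodReady helper are assumptions for illustration, not minikube's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, matching the cadence of the trace above.
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-7c65d6cfc9-mhp28", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            if isPodReady(pod) {
                fmt.Println("ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
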
	I0916 10:46:56.565895   98140 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.565957   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-t9xdr
	I0916 10:46:56.565965   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.565972   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.565976   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.567966   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:56.568507   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:56.568522   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.568530   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.568534   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.570485   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:56.570874   98140 pod_ready.go:93] pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:56.570889   98140 pod_ready.go:82] duration metric: took 4.98866ms for pod "coredns-7c65d6cfc9-t9xdr" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.570899   98140 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.570949   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957
	I0916 10:46:56.570957   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.570964   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.570970   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.572976   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:56.573522   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:56.573538   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.573547   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.573553   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.575491   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:56.575904   98140 pod_ready.go:93] pod "etcd-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:56.575920   98140 pod_ready.go:82] duration metric: took 5.015968ms for pod "etcd-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.575929   98140 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.575975   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m02
	I0916 10:46:56.575983   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.575990   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.575993   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.577912   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:56.578402   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:56.578415   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.578421   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.578426   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.580097   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:56.580471   98140 pod_ready.go:93] pod "etcd-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:56.580487   98140 pod_ready.go:82] duration metric: took 4.552035ms for pod "etcd-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.580498   98140 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.580551   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-107957-m03
	I0916 10:46:56.580561   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.580570   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.580577   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.582329   98140 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0916 10:46:56.582412   98140 pod_ready.go:98] error getting pod "etcd-ha-107957-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-107957-m03" not found
	I0916 10:46:56.582427   98140 pod_ready.go:82] duration metric: took 1.920703ms for pod "etcd-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:56.582439   98140 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "etcd-ha-107957-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-107957-m03" not found
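
The 404 for etcd-ha-107957-m03 is expected here: the m03 control-plane pods no longer exist (the node appears to have been deleted earlier in the run), so the waiter logs the miss and moves on instead of failing. A sketch of that skip pattern, assuming a *kubernetes.Clientset built as in the previous sketch; the package and function names are ours:

    package waiter

    import (
        "context"
        "log"

        k8serrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // skipIfGone mirrors the "(skipping!)" behaviour above: a 404 for a
    // pod from a removed node ends that pod's wait without failing the run.
    func skipIfGone(cs *kubernetes.Clientset, ns, name string) (skip bool, err error) {
        _, err = cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if k8serrors.IsNotFound(err) {
            log.Printf("pod %q in %q namespace not found, skipping", name, ns)
            return true, nil
        }
        return false, err
    }
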
	I0916 10:46:56.582455   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.582508   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957
	I0916 10:46:56.582516   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.582524   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.582535   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.584385   98140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:46:56.760328   98140 request.go:632] Waited for 175.326839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:56.760391   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:56.760399   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.760409   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.760416   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.763106   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:56.763523   98140 pod_ready.go:93] pod "kube-apiserver-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:56.763541   98140 pod_ready.go:82] duration metric: took 181.07777ms for pod "kube-apiserver-ha-107957" in "kube-system" namespace to be "Ready" ...
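
From this point on, most requests report "Waited for ... due to client-side throttling". That delay is imposed locally by client-go's token-bucket rate limiter, not by the API server. A sketch of where those limits live; the raised values are illustrative, not a recommendation:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults to roughly QPS=5 with Burst=10; bursts of GETs
        // beyond that are delayed on the client, producing exactly the
        // "Waited for ... due to client-side throttling" lines above.
        cfg.QPS = 50
        cfg.Burst = 100
        fmt.Printf("rate limit now QPS=%.0f Burst=%d\n", cfg.QPS, cfg.Burst)
    }
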
	I0916 10:46:56.763550   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:56.959924   98140 request.go:632] Waited for 196.308204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:46:56.960012   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m02
	I0916 10:46:56.960020   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:56.960030   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:56.960039   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:56.962723   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:57.160222   98140 request.go:632] Waited for 196.871272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:57.160293   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:57.160301   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:57.160317   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:57.160334   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:57.163041   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:57.163477   98140 pod_ready.go:93] pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:57.163492   98140 pod_ready.go:82] duration metric: took 399.937359ms for pod "kube-apiserver-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:57.163501   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:57.360662   98140 request.go:632] Waited for 197.099166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:46:57.360731   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-107957-m03
	I0916 10:46:57.360737   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:57.360748   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:57.360754   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:57.363426   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:46:57.363553   98140 pod_ready.go:98] error getting pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-107957-m03" not found
	I0916 10:46:57.363568   98140 pod_ready.go:82] duration metric: took 200.05949ms for pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:57.363578   98140 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-107957-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-107957-m03" not found
	I0916 10:46:57.363586   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:57.560059   98140 request.go:632] Waited for 196.403617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:46:57.560122   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957
	I0916 10:46:57.560128   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:57.560135   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:57.560139   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:57.563102   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:57.759950   98140 request.go:632] Waited for 196.271481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:57.760005   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:57.760013   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:57.760024   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:57.760031   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:57.762624   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:57.763060   98140 pod_ready.go:93] pod "kube-controller-manager-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:57.763077   98140 pod_ready.go:82] duration metric: took 399.484745ms for pod "kube-controller-manager-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:57.763087   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:57.960181   98140 request.go:632] Waited for 197.01267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:46:57.960258   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m02
	I0916 10:46:57.960267   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:57.960277   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:57.960285   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:57.963152   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:58.160080   98140 request.go:632] Waited for 196.363153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:58.160165   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:58.160172   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:58.160182   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:58.160187   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:58.162967   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:58.163421   98140 pod_ready.go:93] pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:58.163441   98140 pod_ready.go:82] duration metric: took 400.344639ms for pod "kube-controller-manager-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:58.163454   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:58.360447   98140 request.go:632] Waited for 196.924388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:46:58.360509   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-107957-m03
	I0916 10:46:58.360514   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:58.360521   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:58.360525   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:58.363167   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:46:58.363280   98140 pod_ready.go:98] error getting pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-107957-m03" not found
	I0916 10:46:58.363294   98140 pod_ready.go:82] duration metric: took 199.834072ms for pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:58.363304   98140 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-107957-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-107957-m03" not found
	I0916 10:46:58.363311   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:58.560756   98140 request.go:632] Waited for 197.366522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:46:58.560806   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ctr8
	I0916 10:46:58.560812   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:58.560821   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:58.560827   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:58.563572   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:58.760568   98140 request.go:632] Waited for 196.362441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:58.760638   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:46:58.760646   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:58.760656   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:58.760662   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:58.763246   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:58.763719   98140 pod_ready.go:93] pod "kube-proxy-5ctr8" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:58.763737   98140 pod_ready.go:82] duration metric: took 400.419938ms for pod "kube-proxy-5ctr8" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:58.763746   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:58.959701   98140 request.go:632] Waited for 195.895745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:46:58.959795   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2scr
	I0916 10:46:58.959806   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:58.959817   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:58.959827   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:58.962375   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:46:58.962532   98140 pod_ready.go:98] error getting pod "kube-proxy-f2scr" in "kube-system" namespace (skipping!): pods "kube-proxy-f2scr" not found
	I0916 10:46:58.962552   98140 pod_ready.go:82] duration metric: took 198.798734ms for pod "kube-proxy-f2scr" in "kube-system" namespace to be "Ready" ...
	E0916 10:46:58.962571   98140 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-f2scr" in "kube-system" namespace (skipping!): pods "kube-proxy-f2scr" not found
	I0916 10:46:58.962585   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:59.160095   98140 request.go:632] Waited for 197.427264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:46:59.160189   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hm8zn
	I0916 10:46:59.160195   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:59.160204   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:59.160209   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:59.162945   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:59.360670   98140 request.go:632] Waited for 197.088771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:46:59.360737   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m04
	I0916 10:46:59.360743   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:59.360760   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:59.360767   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:59.363416   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:59.363884   98140 pod_ready.go:93] pod "kube-proxy-hm8zn" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:59.363915   98140 pod_ready.go:82] duration metric: took 401.30801ms for pod "kube-proxy-hm8zn" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:59.363932   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:59.559914   98140 request.go:632] Waited for 195.908464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:46:59.560001   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtxh9
	I0916 10:46:59.560012   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:59.560023   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:59.560035   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:59.562716   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:59.760553   98140 request.go:632] Waited for 197.206829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:59.760604   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:46:59.760611   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:59.760620   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:59.760627   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:59.763527   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:46:59.764033   98140 pod_ready.go:93] pod "kube-proxy-qtxh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:46:59.764050   98140 pod_ready.go:82] duration metric: took 400.109289ms for pod "kube-proxy-qtxh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:59.764060   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:46:59.960113   98140 request.go:632] Waited for 195.980906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:46:59.960191   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957
	I0916 10:46:59.960200   98140 round_trippers.go:469] Request Headers:
	I0916 10:46:59.960207   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:46:59.960215   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:46:59.962914   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:00.159724   98140 request.go:632] Waited for 196.288095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:47:00.159812   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957
	I0916 10:47:00.159821   98140 round_trippers.go:469] Request Headers:
	I0916 10:47:00.159828   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:00.159832   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:00.162500   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:00.163021   98140 pod_ready.go:93] pod "kube-scheduler-ha-107957" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:00.163041   98140 pod_ready.go:82] duration metric: took 398.971909ms for pod "kube-scheduler-ha-107957" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:00.163054   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:00.360240   98140 request.go:632] Waited for 197.113728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:47:00.360299   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m02
	I0916 10:47:00.360304   98140 round_trippers.go:469] Request Headers:
	I0916 10:47:00.360312   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:00.360315   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:00.363534   98140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:47:00.559934   98140 request.go:632] Waited for 195.326817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:47:00.559997   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-107957-m02
	I0916 10:47:00.560002   98140 round_trippers.go:469] Request Headers:
	I0916 10:47:00.560014   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:00.560017   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:00.562718   98140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:00.563174   98140 pod_ready.go:93] pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:00.563193   98140 pod_ready.go:82] duration metric: took 400.132418ms for pod "kube-scheduler-ha-107957-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:00.563209   98140 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:00.760338   98140 request.go:632] Waited for 197.047978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:47:00.760404   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-107957-m03
	I0916 10:47:00.760411   98140 round_trippers.go:469] Request Headers:
	I0916 10:47:00.760419   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:00.760426   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:00.763166   98140 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:47:00.763291   98140 pod_ready.go:98] error getting pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-107957-m03" not found
	I0916 10:47:00.763306   98140 pod_ready.go:82] duration metric: took 200.086714ms for pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:47:00.763315   98140 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-107957-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-107957-m03" not found
	I0916 10:47:00.763326   98140 pod_ready.go:39] duration metric: took 35.216328139s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:47:00.763344   98140 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:47:00.763391   98140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:47:00.774719   98140 system_svc.go:56] duration metric: took 11.367901ms WaitForService to wait for kubelet
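
The WaitForService step above shells out to systemctl inside the node over minikube's SSH runner. Run locally, the equivalent probe is a zero-exit-status test; this is a sketch of the idea, not minikube's runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "--quiet" suppresses output; the unit's state is reported purely
        // through the exit status, which is all the waiter needs.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
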
	I0916 10:47:00.774748   98140 kubeadm.go:582] duration metric: took 35.323660639s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:47:00.774764   98140 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:47:00.960235   98140 request.go:632] Waited for 185.379895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:47:00.960325   98140 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:47:00.960334   98140 round_trippers.go:469] Request Headers:
	I0916 10:47:00.960346   98140 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:00.960359   98140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:00.963530   98140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:47:00.964417   98140 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:47:00.964437   98140 node_conditions.go:123] node cpu capacity is 8
	I0916 10:47:00.964448   98140 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:47:00.964451   98140 node_conditions.go:123] node cpu capacity is 8
	I0916 10:47:00.964455   98140 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:47:00.964458   98140 node_conditions.go:123] node cpu capacity is 8
	I0916 10:47:00.964462   98140 node_conditions.go:105] duration metric: took 189.693713ms to run NodePressure ...
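
The node_conditions lines read each node's reported capacity: three nodes, each with 304681132Ki of ephemeral storage and 8 CPUs. A sketch of the same listing with client-go (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The same fields the node_conditions lines above report.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
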
	I0916 10:47:00.964476   98140 start.go:241] waiting for startup goroutines ...
	I0916 10:47:00.964503   98140 start.go:255] writing updated cluster config ...
	I0916 10:47:00.964819   98140 ssh_runner.go:195] Run: rm -f paused
	I0916 10:47:00.971119   98140 out.go:177] * Done! kubectl is now configured to use "ha-107957" cluster and "default" namespace by default
	E0916 10:47:00.972546   98140 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
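
The cluster start finishes cleanly, but kubectl itself cannot be executed: "exec format error" means the kernel refused to run the binary, typically because it was built for a different architecture or is truncated. A quick sketch for inspecting the file with Go's debug/elf (the path is taken from the log line):

    package main

    import (
        "debug/elf"
        "fmt"
    )

    func main() {
        // A valid Linux executable parses as ELF; the class and machine
        // fields show which platform it was actually built for.
        f, err := elf.Open("/usr/local/bin/kubectl")
        if err != nil {
            fmt.Println("not a readable ELF binary:", err)
            return
        }
        defer f.Close()
        fmt.Printf("class=%v machine=%v\n", f.Class, f.Machine)
    }
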
	
	
	==> CRI-O <==
	Sep 16 10:46:16 ha-107957 crio[682]: time="2024-09-16 10:46:16.813026740Z" level=info msg="Started container" PID=1461 containerID=050176afcaa594cf72662ce7803f6bcf6204f2ee0f79e5048146f8bae72b65db description=kube-system/coredns-7c65d6cfc9-mhp28/coredns id=89bbafeb-5e51-4947-ba91-880e5a938caf name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf60b677a6def2973fc723149be8fab4fabfa9acc4db86ff76c4d91e440dc770
	Sep 16 10:46:46 ha-107957 conmon[1344]: conmon 4f0ae78c48ff662cea47 <ninfo>: container 1370 exited with status 1
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.206521104Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=40edcf1f-3b11-4668-8c14-fd1def478f94 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.206767457Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=40edcf1f-3b11-4668-8c14-fd1def478f94 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.207477424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2c731ad6-9821-4751-8ead-a876f2cd9729 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.207686931Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2c731ad6-9821-4751-8ead-a876f2cd9729 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.208391599Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=86db0659-d043-4803-b0c2-ef9f5ebdc3ef name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.208495748Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.220261767Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ef4babd2719a9aed04acb583af7f33d9d7205a187e4a4948630f698fb0b038a8/merged/etc/passwd: no such file or directory"
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.220296816Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ef4babd2719a9aed04acb583af7f33d9d7205a187e4a4948630f698fb0b038a8/merged/etc/group: no such file or directory"
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.253227784Z" level=info msg="Created container 64771c3f05f46d4ab73b7c66dc8ab97bd12a0ffd5fa81e1a53390bd71f4c3615: kube-system/storage-provisioner/storage-provisioner" id=86db0659-d043-4803-b0c2-ef9f5ebdc3ef name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.253910808Z" level=info msg="Starting container: 64771c3f05f46d4ab73b7c66dc8ab97bd12a0ffd5fa81e1a53390bd71f4c3615" id=bc778bab-16c4-430f-a413-4ffad770d612 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:46:47 ha-107957 crio[682]: time="2024-09-16 10:46:47.259954922Z" level=info msg="Started container" PID=1788 containerID=64771c3f05f46d4ab73b7c66dc8ab97bd12a0ffd5fa81e1a53390bd71f4c3615 description=kube-system/storage-provisioner/storage-provisioner id=bc778bab-16c4-430f-a413-4ffad770d612 name=/runtime.v1.RuntimeService/StartContainer sandboxID=549953506c9a2e1ecfe83c74c71f3cc6f872a37950c2293bf739c6e863980f52
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.217626792Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.221312099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.221365061Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.221389218Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.224717514Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.224746514Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.224758587Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.228278043Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.228302843Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.228314139Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.231477116Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:46:57 ha-107957 crio[682]: time="2024-09-16 10:46:57.231504717Z" level=info msg="Updated default CNI network name to kindnet"
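
The CREATE/WRITE/RENAME triple above is CRI-O's filesystem watch on /etc/cni/net.d catching kindnet writing its conflist through a temp file followed by a rename. A minimal sketch of that kind of watch using fsnotify; this is an assumption about the mechanism, not CRI-O's actual source:

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        // Each event corresponds to one "CNI monitoring event" line above.
        for {
            select {
            case ev := <-w.Events:
                log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }
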
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	64771c3f05f46       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago       Running             storage-provisioner       5                   549953506c9a2       storage-provisioner
	050176afcaa59       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   45 seconds ago       Running             coredns                   2                   bf60b677a6def       coredns-7c65d6cfc9-mhp28
	85ddbae819aa0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   45 seconds ago       Running             coredns                   2                   e5ff0da66775f       coredns-7c65d6cfc9-t9xdr
	e2c653e6c536c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   46 seconds ago       Running             busybox                   2                   a788cfd1f23e5       busybox-7dff88458-m2jh6
	4f0ae78c48ff6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   46 seconds ago       Exited              storage-provisioner       4                   549953506c9a2       storage-provisioner
	0b379b57956bd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   46 seconds ago       Running             kindnet-cni               2                   a8b5dbfcc9ef4       kindnet-rwcs2
	89a8793fec734       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   46 seconds ago       Running             kube-proxy                2                   d26495469dd89       kube-proxy-5ctr8
	79a434ad5e21d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Running             kube-scheduler            2                   3978cd8b5cab3       kube-scheduler-ha-107957
	96bcf808af897       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Running             kube-apiserver            3                   79aec277ea6d7       kube-apiserver-ha-107957
	83d8c939e1cd4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Running             kube-controller-manager   5                   eb7229648c934       kube-controller-manager-ha-107957
	dc21e7906d372       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      2                   b9d804446044f       etcd-ha-107957
	72c11f3e42ba5       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   About a minute ago   Running             kube-vip                  2                   ae51dc62225df       kube-vip-ha-107957
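
This table is a rendering of the CRI RuntimeService.ListContainers call that the log IDs above reference (/runtime.v1.RuntimeService/...). A sketch of querying it directly over the CRI-O socket; the socket path is the common default and may differ on other setups:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        client := runtimev1.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(context.TODO(), &runtimev1.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Truncate to the 13-char short ID used in the table above.
            fmt.Printf("%s  %v  %s\n", c.Id[:13], c.State, c.Metadata.Name)
        }
    }
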
	
	
	==> coredns [050176afcaa594cf72662ce7803f6bcf6204f2ee0f79e5048146f8bae72b65db] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32768 - 13586 "HINFO IN 314889262725648119.347566910037394627. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014225293s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[106626103]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:46:16.911) (total time: 30001ms):
	Trace[106626103]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:46:46.912)
	Trace[106626103]: [30.001239481s] [30.001239481s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1731108792]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:46:16.911) (total time: 30001ms):
	Trace[1731108792]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:46:46.912)
	Trace[1731108792]: [30.001278623s] [30.001278623s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[818789763]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:46:16.911) (total time: 30001ms):
	Trace[818789763]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:46:46.912)
	Trace[818789763]: [30.001351813s] [30.001351813s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [85ddbae819aa03881b4b151a0f15e4509cd46122da462cb3d5cb66b8c7ef5b34] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54518 - 35627 "HINFO IN 8423621359955776337.4988234204016665128. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009764107s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[45619372]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:46:16.828) (total time: 30001ms):
	Trace[45619372]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:46:46.829)
	Trace[45619372]: [30.001149316s] [30.001149316s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[287863693]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:46:16.829) (total time: 30000ms):
	Trace[287863693]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:46:46.830)
	Trace[287863693]: [30.000967844s] [30.000967844s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2001325837]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:46:16.828) (total time: 30001ms):
	Trace[2001325837]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:46:46.830)
	Trace[2001325837]: [30.001266433s] [30.001266433s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
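	
	Both CoreDNS replicas report the same symptom: every list/watch against the in-cluster apiserver VIP 10.96.0.1:443 dies after a 30s i/o timeout, so the failure sits at the service/dataplane layer rather than in DNS itself. A minimal reachability sketch from a throwaway pod; the busybox image tag and the /version path are illustrative assumptions, and the point is only that any HTTP or TLS response, even an error, rules out the timeout:
	
	  # hypothetical spot-check: hit the same ClusterIP the reflectors timed out on
	  kubectl --context ha-107957 run netcheck --rm -it --restart=Never \
	    --image=busybox:1.36 -- \
	    wget -qO- -T 5 --no-check-certificate https://10.96.0.1:443/version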
	
	
	==> describe nodes <==
	Name:               ha-107957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_37_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:37:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:46:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:46:16 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:46:16 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:46:16 +0000   Mon, 16 Sep 2024 10:37:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:46:16 +0000   Mon, 16 Sep 2024 10:42:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-107957
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd7d122c74ed4252b316da80a5deb118
	  System UUID:                4b3cbb31-41b2-4aeb-852f-1a17b0b6a69f
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m2jh6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 coredns-7c65d6cfc9-mhp28             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m38s
	  kube-system                 coredns-7c65d6cfc9-t9xdr             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m38s
	  kube-system                 etcd-ha-107957                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m43s
	  kube-system                 kindnet-rwcs2                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m38s
	  kube-system                 kube-apiserver-ha-107957             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 kube-controller-manager-ha-107957    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 kube-proxy-5ctr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-scheduler-ha-107957             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 kube-vip-ha-107957                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m16s                  kube-proxy       
	  Normal   Starting                 9m37s                  kube-proxy       
	  Normal   Starting                 45s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    9m43s                  kubelet          Node ha-107957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m43s                  kubelet          Node ha-107957 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m43s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m43s                  kubelet          Node ha-107957 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m39s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   NodeReady                9m27s                  kubelet          Node ha-107957 status is now: NodeReady
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           8m14s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           5m46s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node ha-107957 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 5m14s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node ha-107957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node ha-107957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m44s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           3m41s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           2m51s                  node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)      kubelet          Node ha-107957 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 73s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)      kubelet          Node ha-107957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     73s (x7 over 73s)      kubelet          Node ha-107957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-107957 event: Registered Node ha-107957 in Controller
	
	
	Name:               ha-107957-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_37_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:37:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:47:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:46:11 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:46:11 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:46:11 +0000   Mon, 16 Sep 2024 10:37:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:46:11 +0000   Mon, 16 Sep 2024 10:38:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-107957-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa66b9b8c2fb4a95afa0f0fab7737a4b
	  System UUID:                15471af5-ad40-4515-bf0c-79f0cc3f164e
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-plmdj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 etcd-ha-107957-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m24s
	  kube-system                 kindnet-sjkjx                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m25s
	  kube-system                 kube-apiserver-ha-107957-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 kube-controller-manager-ha-107957-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 kube-proxy-qtxh9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-scheduler-ha-107957-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 kube-vip-ha-107957-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m22s                  kube-proxy       
	  Normal   Starting                 6m3s                   kube-proxy       
	  Normal   Starting                 4m35s                  kube-proxy       
	  Normal   Starting                 30s                    kube-proxy       
	  Normal   NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m24s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           8m14s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   NodeHasSufficientPID     6m21s (x7 over 6m21s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m21s (x8 over 6m21s)  kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m21s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m21s (x8 over 6m21s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           5m46s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m13s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           4m44s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           3m41s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           2m51s                  node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   Starting                 72s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)      kubelet          Node ha-107957-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)      kubelet          Node ha-107957-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x7 over 72s)      kubelet          Node ha-107957-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-107957-m02 event: Registered Node ha-107957-m02 in Controller
	
	
	Name:               ha-107957-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-107957-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-107957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_51_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-107957-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:46:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:46:32 +0000   Mon, 16 Sep 2024 10:44:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:46:32 +0000   Mon, 16 Sep 2024 10:44:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:46:32 +0000   Mon, 16 Sep 2024 10:44:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:46:32 +0000   Mon, 16 Sep 2024 10:44:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-107957-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 b21fc4e997a7464aa7fdcd1054f13226
	  System UUID:                85f6a07b-6b9f-43fc-98ae-305e46935522
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5jwbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kindnet-4lkzl              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m12s
	  kube-system                 kube-proxy-hm8zn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 21s                    kube-proxy       
	  Normal   Starting                 7m11s                  kube-proxy       
	  Normal   Starting                 2m12s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m12s (x2 over 7m12s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m12s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   Starting                 7m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m12s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    7m12s (x2 over 7m12s)  kubelet          Node ha-107957-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m12s (x2 over 7m12s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m9s                   node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   RegisteredNode           7m9s                   node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   NodeReady                7m                     kubelet          Node ha-107957-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m46s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   RegisteredNode           4m44s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   NodeNotReady             4m3s                   node-controller  Node ha-107957-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m41s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   RegisteredNode           2m51s                  node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m33s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     2m27s (x7 over 2m33s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m21s (x8 over 2m33s)  kubelet          Node ha-107957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m21s (x8 over 2m33s)  kubelet          Node ha-107957-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           50s                    node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   Starting                 42s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 42s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           39s                    node-controller  Node ha-107957-m04 event: Registered Node ha-107957-m04 in Controller
	  Normal   NodeHasSufficientPID     36s (x7 over 42s)      kubelet          Node ha-107957-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  30s (x8 over 42s)      kubelet          Node ha-107957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    30s (x8 over 42s)      kubelet          Node ha-107957-m04 status is now: NodeHasNoDiskPressure
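	
	Only ha-107957, ha-107957-m02 and ha-107957-m04 are described above; the third control-plane node, ha-107957-m03, is already gone, which lines up with the pod-garbage-collector errors in the kube-controller-manager log below. A quick membership check, assuming the same context name:
	
	  kubectl --context ha-107957 get nodes -o wide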
	
	
	==> dmesg <==
	[  +0.095980] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004016] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +1.915832] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +4.031681] net_ratelimit: 5 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000002] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.255941] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000001] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004022] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +7.931402] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000002] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004224] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.251741] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000008] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
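	
	The repeated martian-source lines mean the host kernel sees packets for 10.96.0.1 arriving on the Docker bridge br-1162a04f8fb0 from pod source addresses that fail its reverse-path check; they are printed because martian logging is enabled for that interface. A spot-check on the machine running Docker (not inside the minikube node):
	
	  sysctl net.ipv4.conf.br-1162a04f8fb0.log_martians net.ipv4.conf.all.log_martians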
	
	
	==> etcd [dc21e7906d3728ec7e5bc4d9dbfd8ff92564cb39743ec378f8366ede14a093f9] <==
	{"level":"info","ts":"2024-09-16T10:46:08.541681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 5"}
	{"level":"info","ts":"2024-09-16T10:46:08.541735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 5"}
	{"level":"info","ts":"2024-09-16T10:46:08.541750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2024-09-16T10:46:08.541765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc [logterm: 5, index: 2936] sent MsgPreVote request to b0ea00fb31119a01 at term 5"}
	{"level":"info","ts":"2024-09-16T10:46:08.542351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from b0ea00fb31119a01 at term 5"}
	{"level":"info","ts":"2024-09-16T10:46:08.542381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-09-16T10:46:08.542395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 6"}
	{"level":"info","ts":"2024-09-16T10:46:08.542404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 6"}
	{"level":"info","ts":"2024-09-16T10:46:08.542417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc [logterm: 5, index: 2936] sent MsgVote request to b0ea00fb31119a01 at term 6"}
	{"level":"info","ts":"2024-09-16T10:46:08.545993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from b0ea00fb31119a01 at term 6"}
	{"level":"info","ts":"2024-09-16T10:46:08.546028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-09-16T10:46:08.546042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 6"}
	{"level":"info","ts":"2024-09-16T10:46:08.546051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 6"}
	{"level":"warn","ts":"2024-09-16T10:46:08.546640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.442718653s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-16T10:46:08.546700Z","caller":"traceutil/trace.go:171","msg":"trace[1421501453] range","detail":"{range_begin:; range_end:; }","duration":"3.443292563s","start":"2024-09-16T10:46:05.103394Z","end":"2024-09-16T10:46:08.546686Z","steps":["trace[1421501453] 'agreement among raft nodes before linearized reading'  (duration: 3.442715638s)"],"step_count":1}
	{"level":"error","ts":"2024-09-16T10:46:08.546744Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-16T10:46:08.549176Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-107957 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:46:08.549217Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:46:08.549226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:46:08.549371Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:46:08.549407Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:46:08.550306Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:46:08.550506Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:46:08.551295Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:46:08.551501Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 10:47:02 up 29 min,  0 users,  load average: 1.10, 0.98, 0.68
	Linux ha-107957 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0b379b57956bd5719f3d1530dec41f4ff8b06448098985d3cf9cb36170642fde] <==
	I0916 10:46:47.218128       1 trace.go:236] Trace[342352146]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 10:46:17.217) (total time: 30000ms):
	Trace[342352146]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:46:47.218)
	Trace[342352146]: [30.000651696s] [30.000651696s] END
	E0916 10:46:47.218142       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 10:46:47.218140       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 10:46:47.218167       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 10:46:47.218223       1 trace.go:236] Trace[707820789]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 10:46:17.217) (total time: 30000ms):
	Trace[707820789]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:46:47.218)
	Trace[707820789]: [30.000353777s] [30.000353777s] END
	I0916 10:46:47.218234       1 trace.go:236] Trace[905146532]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 10:46:17.217) (total time: 30000ms):
	Trace[905146532]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:46:47.218)
	Trace[905146532]: [30.000307954s] [30.000307954s] END
	E0916 10:46:47.218240       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0916 10:46:47.218247       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 10:46:48.818762       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:46:48.818838       1 metrics.go:61] Registering metrics
	I0916 10:46:48.818917       1 controller.go:374] Syncing nftables rules
	I0916 10:46:57.216938       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:46:57.216985       1 main.go:322] Node ha-107957-m04 has CIDR [10.244.3.0/24] 
	I0916 10:46:57.217243       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0916 10:46:57.217371       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:46:57.217388       1 main.go:299] handling current node
	I0916 10:46:57.219795       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:46:57.219818       1 main.go:322] Node ha-107957-m02 has CIDR [10.244.1.0/24] 
	I0916 10:46:57.219913       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
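	
	Once its informers recover from the same 10.96.0.1 timeouts that hit CoreDNS (caches sync at 10:46:48), kindnet reprograms one route per remote node: 10.244.3.0/24 via 192.168.49.5 (m04) and 10.244.1.0/24 via 192.168.49.3 (m02). A sketch to confirm them from the node:
	
	  minikube -p ha-107957 ssh -- ip route show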
	
	
	==> kube-apiserver [96bcf808af897e323066b506959fbcec12466ec52806c56e673803862a87f3ab] <==
	I0916 10:46:09.515477       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0916 10:46:09.514129       1 establishing_controller.go:81] Starting EstablishingController
	I0916 10:46:09.594223       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:46:09.594661       1 policy_source.go:224] refreshing policies
	I0916 10:46:09.600524       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:46:09.613814       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:46:09.613860       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:46:09.613870       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:46:09.613876       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:46:09.613882       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:46:09.613969       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:46:09.614679       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:46:09.614679       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:46:09.614702       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:46:09.615036       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:46:09.615087       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:46:09.615741       1 shared_informer.go:320] Caches are synced for configmaps
	W0916 10:46:09.619657       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0916 10:46:09.620844       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:46:09.621780       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:46:09.622548       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:46:09.626512       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 10:46:09.628531       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 10:46:10.518630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:46:10.739749       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
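	
	The two lease warnings bracket the recovery: the endpoint reconciler first finds only 192.168.49.3 behind the kubernetes service (this apiserver had been down, hence the stale-data cleanup), then re-adds 192.168.49.2, leaving exactly the two surviving control-plane IPs. To inspect the reconciled object:
	
	  kubectl --context ha-107957 -n default get endpoints kubernetes -o yaml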
	
	
	==> kube-controller-manager [83d8c939e1cd4a173ef22b756a1f472185cdad8b052869c10a737fabc3ff1f5c] <==
	I0916 10:46:41.176512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.512µs"
	I0916 10:46:42.244558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.864179ms"
	I0916 10:46:42.244664       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.994µs"
	E0916 10:46:52.686242       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	E0916 10:46:52.686270       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	E0916 10:46:52.686282       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	E0916 10:46:52.686289       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	E0916 10:46:52.686296       1 gc_controller.go:151] "Failed to get node" err="node \"ha-107957-m03\" not found" logger="pod-garbage-collector-controller" node="ha-107957-m03"
	I0916 10:46:52.698443       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-107957-m03"
	I0916 10:46:52.718094       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-107957-m03"
	I0916 10:46:52.718130       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-107957-m03"
	I0916 10:46:52.735510       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-107957-m03"
	I0916 10:46:52.735559       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-107957-m03"
	I0916 10:46:52.754001       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-107957-m03"
	I0916 10:46:52.754107       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-107957-m03"
	I0916 10:46:52.771629       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-107957-m03"
	I0916 10:46:52.771661       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-f2scr"
	I0916 10:46:52.789233       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-f2scr"
	I0916 10:46:52.789266       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rcsxv"
	I0916 10:46:52.809790       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rcsxv"
	I0916 10:46:52.809930       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-107957-m03"
	I0916 10:46:52.827531       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-107957-m03"
	I0916 10:46:56.444690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.429357ms"
	I0916 10:46:56.454349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.60347ms"
	I0916 10:46:56.455606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="166.118µs"
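	
	PodGC here force-deletes the seven pods still bound to the removed ha-107957-m03; the preceding "Failed to get node" errors are the expected prelude, since the Node object is already gone while its pods linger. Verifying that nothing still references the node:
	
	  kubectl --context ha-107957 get pods -A -o wide \
	    --field-selector spec.nodeName=ha-107957-m03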
	
	
	==> kube-proxy [89a8793fec734e072da35fafcb4f7b0dba60a552284c187c5cbce2ad1dfde214] <==
	I0916 10:46:16.897741       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:46:17.001780       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:46:17.001862       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:46:17.024199       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:46:17.024259       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:46:17.026369       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:46:17.026821       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:46:17.026856       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:46:17.030654       1 config.go:199] "Starting service config controller"
	I0916 10:46:17.030661       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:46:17.030684       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:46:17.030691       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:46:17.030726       1 config.go:328] "Starting node config controller"
	I0916 10:46:17.030733       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:46:17.131322       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:46:17.131359       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:46:17.131332       1 shared_informer.go:320] Caches are synced for node config
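	
	kube-proxy restarts in iptables mode and syncs all three config caches within about 100ms, so service VIP programming on this node should be current from 10:46:17 on; the CoreDNS timeouts above predate that. A sketch for eyeballing the programmed service chains on the node:
	
	  minikube -p ha-107957 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20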
	
	
	==> kube-scheduler [79a434ad5e21dfce08a1275df0ed9c27f0363ef21cc7ac35382e77da452cd3ad] <==
	W0916 10:46:09.599666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:46:09.599735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:46:09.600024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:46:09.600094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:46:09.600245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0916 10:46:09.600301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:46:09.600332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:46:09.600309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:46:09.604915       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:46:09.605156       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:46:09.605165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0916 10:46:09.605309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 10:46:09.605410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:46:09.605426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:46:09.605304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:46:09.605725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:46:09.605236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:46:09.605774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:46:09.605346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:46:09.605807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:46:09.605453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:46:09.605840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:46:09.605468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:46:09.605872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:46:10.636661       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:46:16 ha-107957 kubelet[838]: E0916 10:46:16.150849     838 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ha-107957\" already exists" pod="kube-system/kube-apiserver-ha-107957"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.154316     838 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.170680     838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/df0e02e3-2a14-48fb-8f07-47dd836c8ea4-cni-cfg\") pod \"kindnet-rwcs2\" (UID: \"df0e02e3-2a14-48fb-8f07-47dd836c8ea4\") " pod="kube-system/kindnet-rwcs2"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.170764     838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df0e02e3-2a14-48fb-8f07-47dd836c8ea4-lib-modules\") pod \"kindnet-rwcs2\" (UID: \"df0e02e3-2a14-48fb-8f07-47dd836c8ea4\") " pod="kube-system/kindnet-rwcs2"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.170842     838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df0e02e3-2a14-48fb-8f07-47dd836c8ea4-xtables-lock\") pod \"kindnet-rwcs2\" (UID: \"df0e02e3-2a14-48fb-8f07-47dd836c8ea4\") " pod="kube-system/kindnet-rwcs2"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.170915     838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae19e764-5020-48d7-9e34-adc329e8c502-lib-modules\") pod \"kube-proxy-5ctr8\" (UID: \"ae19e764-5020-48d7-9e34-adc329e8c502\") " pod="kube-system/kube-proxy-5ctr8"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.170944     838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7b4f4924-ccac-42ba-983c-5ac7e0696277-tmp\") pod \"storage-provisioner\" (UID: \"7b4f4924-ccac-42ba-983c-5ac7e0696277\") " pod="kube-system/storage-provisioner"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.170963     838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae19e764-5020-48d7-9e34-adc329e8c502-xtables-lock\") pod \"kube-proxy-5ctr8\" (UID: \"ae19e764-5020-48d7-9e34-adc329e8c502\") " pod="kube-system/kube-proxy-5ctr8"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.193977     838 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.947779     838 kubelet_node_status.go:72] "Attempting to register node" node="ha-107957"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.957604     838 kubelet_node_status.go:111] "Node was previously registered" node="ha-107957"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.957735     838 kubelet_node_status.go:75] "Successfully registered node" node="ha-107957"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.957771     838 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:46:16 ha-107957 kubelet[838]: I0916 10:46:16.958602     838 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:46:19 ha-107957 kubelet[838]: E0916 10:46:19.103996     838 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483579103812311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:19 ha-107957 kubelet[838]: E0916 10:46:19.104035     838 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483579103812311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:29 ha-107957 kubelet[838]: E0916 10:46:29.105263     838 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483589105101940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:29 ha-107957 kubelet[838]: E0916 10:46:29.105309     838 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483589105101940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:39 ha-107957 kubelet[838]: E0916 10:46:39.106388     838 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483599106205389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:39 ha-107957 kubelet[838]: E0916 10:46:39.106422     838 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483599106205389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:47 ha-107957 kubelet[838]: I0916 10:46:47.206103     838 scope.go:117] "RemoveContainer" containerID="4f0ae78c48ff662cea479dd5c49a572fa43c1d78ed6ca0917af5c94293d916a1"
	Sep 16 10:46:49 ha-107957 kubelet[838]: E0916 10:46:49.107512     838 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483609107332538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:49 ha-107957 kubelet[838]: E0916 10:46:49.107557     838 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483609107332538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:59 ha-107957 kubelet[838]: E0916 10:46:59.108674     838 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483619108479713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:46:59 ha-107957 kubelet[838]: E0916 10:46:59.108713     838 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483619108479713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147132,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
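The repeating eviction-manager errors in the kubelet log above come from kubelet's ImageFsInfo CRI call: CRI-O answers with an ImageFsInfoResponse that lacks the container-filesystem stats the HasDedicatedImageFs check expects (note the empty ContainerFilesystems in the dumped response), so the eviction manager fails to synchronize every 10s. The raw CRI response can be inspected on the node directly; a minimal diagnostic sketch, assuming crictl is available inside the node (it typically ships in the kicbase image):

    # Hypothetical diagnostic, run against the profile from this test:
    out/minikube-linux-amd64 -p ha-107957 ssh -- sudo crictl imagefsinfo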
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-107957 -n ha-107957
helpers_test.go:261: (dbg) Run:  kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (454.252µs)
helpers_test.go:263: kubectl --context ha-107957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/RestartCluster (81.19s)
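The kube-scheduler "forbidden" warnings earlier in this dump are all timestamped within one second of the restart and do not recur after the "Caches are synced" line, which is the usual transient pattern while the restarted apiserver is still establishing RBAC, rather than a genuinely missing grant. If they persisted, the grants could be checked directly; a minimal sketch, assuming a working kubectl for this context (which the exec format error rules out on this runner):

    kubectl --context ha-107957 auth can-i list pods --as=system:kube-scheduler
    kubectl --context ha-107957 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler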

TestMultiNode/serial/MultiNodeLabels (2.25s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-026168 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-026168 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": fork/exec /usr/local/bin/kubectl: exec format error (499.657µs)
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-026168 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": fork/exec /usr/local/bin/kubectl: exec format error
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-026168 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
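Nearly every kubectl invocation in this run dies the same way: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the binary at all, which usually indicates a file that is not a valid executable for this host (a truncated download, or a build for a different architecture); the sub-millisecond failure times (499.657µs here) confirm the process never started. A minimal diagnostic sketch, using the path reported in the log:

    file /usr/local/bin/kubectl              # expect: ELF 64-bit LSB executable, x86-64
    uname -m                                 # this runner reports x86_64
    head -c 4 /usr/local/bin/kubectl | xxd   # a valid ELF binary begins 7f 45 4c 46 (".ELF")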
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-026168
helpers_test.go:235: (dbg) docker inspect multinode-026168:

-- stdout --
	[
	    {
	        "Id": "23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74",
	        "Created": "2024-09-16T10:53:21.752929602Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 151054,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:53:21.869714559Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/hostname",
	        "HostsPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/hosts",
	        "LogPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74-json.log",
	        "Name": "/multinode-026168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-026168:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-026168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-026168",
	                "Source": "/var/lib/docker/volumes/multinode-026168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-026168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-026168",
	                "name.minikube.sigs.k8s.io": "multinode-026168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7af9c28e5e64078796e260ddd459f762670a6f4dbc2efb9ece79d12ebff981c",
	            "SandboxKey": "/var/run/docker/netns/b7af9c28e5e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-026168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a5a173559814a989877e5b7826f3cf7f4df5f065fe1cdcc6350cf486bc64e678",
	                    "EndpointID": "4f9d887b0da816276a4cc9cb835cc6812b15d59e3eb718896f4150bf9e5d1a47",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-026168",
	                        "23ba806c0524"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
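The full docker inspect dump above is what the post-mortem captures; when only a few fields matter, the same data can be queried with Go templates instead of reading the whole document. A sketch against the container from this dump (the field paths follow the JSON above):

    docker inspect -f '{{.State.Status}}' multinode-026168
    docker inspect -f '{{(index .NetworkSettings.Networks "multinode-026168").IPAddress}}' multinode-026168
    docker inspect -f '{{range $p, $b := .NetworkSettings.Ports}}{{$p}} -> {{(index $b 0).HostPort}}{{"\n"}}{{end}}' multinode-026168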
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-026168 -n multinode-026168
helpers_test.go:244: <<< TestMultiNode/serial/MultiNodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-026168 logs -n 25: (1.24908376s)
helpers_test.go:252: TestMultiNode/serial/MultiNodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-085030 ssh -- ls                    | mount-start-2-085030 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-070941                           | mount-start-1-070941 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-085030 ssh -- ls                    | mount-start-2-085030 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-085030                           | mount-start-2-085030 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	| start   | -p mount-start-2-085030                           | mount-start-2-085030 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	| ssh     | mount-start-2-085030 ssh -- ls                    | mount-start-2-085030 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-085030                           | mount-start-2-085030 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	| delete  | -p mount-start-1-070941                           | mount-start-1-070941 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	| start   | -p multinode-026168                               | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:54 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- apply -f                   | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- rollout                    | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- get pods -o                | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- get pods -o                | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-qt9rx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-z8csk --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-qt9rx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-z8csk --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-qt9rx -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-z8csk -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- get pods -o                | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-qt9rx                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-qt9rx -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.67.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-z8csk                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-026168 -- exec                       | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:54 UTC |
	|         | busybox-7dff88458-z8csk -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.67.1                         |                      |         |         |                     |                     |
	| node    | add -p multinode-026168 -v 3                      | multinode-026168     | jenkins | v1.34.0 | 16 Sep 24 10:54 UTC | 16 Sep 24 10:55 UTC |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:53:16
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:53:16.240635  150386 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:53:16.240738  150386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:16.240743  150386 out.go:358] Setting ErrFile to fd 2...
	I0916 10:53:16.240747  150386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:16.240929  150386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:53:16.241499  150386 out.go:352] Setting JSON to false
	I0916 10:53:16.242411  150386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2136,"bootTime":1726481860,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:53:16.242505  150386 start.go:139] virtualization: kvm guest
	I0916 10:53:16.245004  150386 out.go:177] * [multinode-026168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:53:16.246642  150386 notify.go:220] Checking for updates...
	I0916 10:53:16.246654  150386 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:53:16.248057  150386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:53:16.249745  150386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:16.251336  150386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:53:16.252776  150386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:53:16.254106  150386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:53:16.255610  150386 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:53:16.277663  150386 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:53:16.277759  150386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:53:16.331858  150386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:53:16.322223407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:53:16.331964  150386 docker.go:318] overlay module found
	I0916 10:53:16.334087  150386 out.go:177] * Using the docker driver based on user configuration
	I0916 10:53:16.335429  150386 start.go:297] selected driver: docker
	I0916 10:53:16.335446  150386 start.go:901] validating driver "docker" against <nil>
	I0916 10:53:16.335457  150386 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:53:16.336234  150386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:53:16.383688  150386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:53:16.373943804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:53:16.383844  150386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:53:16.384051  150386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:53:16.385893  150386 out.go:177] * Using Docker driver with root privileges
	I0916 10:53:16.387506  150386 cni.go:84] Creating CNI manager for ""
	I0916 10:53:16.387550  150386 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:53:16.387559  150386 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:53:16.387651  150386 start.go:340] cluster config:
	{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:53:16.389477  150386 out.go:177] * Starting "multinode-026168" primary control-plane node in "multinode-026168" cluster
	I0916 10:53:16.391199  150386 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:53:16.393047  150386 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:53:16.394534  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:53:16.394579  150386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:53:16.394590  150386 cache.go:56] Caching tarball of preloaded images
	I0916 10:53:16.394653  150386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:53:16.394679  150386 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:53:16.394687  150386 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:53:16.395028  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:53:16.395053  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json: {Name:mk91cb70ae479e3389c4ae23dab5870b80a4399e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 10:53:16.415170  150386 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:53:16.415191  150386 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:53:16.415291  150386 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:53:16.415312  150386 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:53:16.415318  150386 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:53:16.415328  150386 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:53:16.415335  150386 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:53:16.416531  150386 image.go:273] response: 
	I0916 10:53:16.473943  150386 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:53:16.474010  150386 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:53:16.474053  150386 start.go:360] acquireMachinesLock for multinode-026168: {Name:mk1016c8f1a43c2d6030796baf01aa33f86316e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:53:16.474190  150386 start.go:364] duration metric: took 109.669µs to acquireMachinesLock for "multinode-026168"
	I0916 10:53:16.474220  150386 start.go:93] Provisioning new machine with config: &{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:53:16.474334  150386 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:53:16.476233  150386 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:53:16.476541  150386 start.go:159] libmachine.API.Create for "multinode-026168" (driver="docker")
	I0916 10:53:16.476574  150386 client.go:168] LocalClient.Create starting
	I0916 10:53:16.476652  150386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:53:16.476695  150386 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:16.476712  150386 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:16.476764  150386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:53:16.476799  150386 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:16.476815  150386 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:16.477238  150386 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:53:16.494854  150386 cli_runner.go:211] docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:53:16.494927  150386 network_create.go:284] running [docker network inspect multinode-026168] to gather additional debugging logs...
	I0916 10:53:16.494974  150386 cli_runner.go:164] Run: docker network inspect multinode-026168
	W0916 10:53:16.515079  150386 cli_runner.go:211] docker network inspect multinode-026168 returned with exit code 1
	I0916 10:53:16.515125  150386 network_create.go:287] error running [docker network inspect multinode-026168]: docker network inspect multinode-026168: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-026168 not found
	I0916 10:53:16.515144  150386 network_create.go:289] output of [docker network inspect multinode-026168]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-026168 not found
	
	** /stderr **
	I0916 10:53:16.515299  150386 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:53:16.535537  150386 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 10:53:16.535947  150386 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 10:53:16.536373  150386 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018650a0}
	I0916 10:53:16.536394  150386 network_create.go:124] attempt to create docker network multinode-026168 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0916 10:53:16.536435  150386 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-026168 multinode-026168
	I0916 10:53:16.601989  150386 network_create.go:108] docker network multinode-026168 192.168.67.0/24 created
	I0916 10:53:16.602030  150386 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-026168" container
	I0916 10:53:16.602084  150386 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:53:16.619330  150386 cli_runner.go:164] Run: docker volume create multinode-026168 --label name.minikube.sigs.k8s.io=multinode-026168 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:53:16.637521  150386 oci.go:103] Successfully created a docker volume multinode-026168
	I0916 10:53:16.637606  150386 cli_runner.go:164] Run: docker run --rm --name multinode-026168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-026168 --entrypoint /usr/bin/test -v multinode-026168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:53:17.150042  150386 oci.go:107] Successfully prepared a docker volume multinode-026168
	I0916 10:53:17.150090  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:53:17.150115  150386 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:53:17.150171  150386 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-026168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:53:21.687566  150386 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-026168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.537347589s)
	I0916 10:53:21.687602  150386 kic.go:203] duration metric: took 4.537484242s to extract preloaded images to volume ...
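
The preload step above never touches the host filesystem directly: one throwaway kicbase container (`/usr/bin/test -d /var/lib`) forces Docker to create and initialize the named volume, and a second one runs tar with lz4 to unpack the image preload into it. A sketch of the second invocation via os/exec, assuming the same flags as the Run: line; the tarball path here is a hypothetical stand-in for the .minikube cache path in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        volume := "multinode-026168"
        // Hypothetical local path; the log uses the full preload path
        // under .minikube/cache/preloaded-tarball.
        tarball := "/tmp/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644"

        // Same shape as the Run: line above: a throwaway container whose
        // entrypoint is tar, unpacking the lz4 preload into the volume.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }
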
	W0916 10:53:21.687727  150386 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:53:21.687818  150386 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:53:21.736769  150386 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-026168 --name multinode-026168 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-026168 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-026168 --network multinode-026168 --ip 192.168.67.2 --volume multinode-026168:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:53:22.041826  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Running}}
	I0916 10:53:22.060023  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:22.080328  150386 cli_runner.go:164] Run: docker exec multinode-026168 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:53:22.124480  150386 oci.go:144] the created container "multinode-026168" has a running status.
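
Once the big `docker run` above returns, readiness is confirmed with two cheap checks: `container inspect` for `.State.Running` and an exec of `stat` on the iptables alternatives file. Roughly, in Go (container name copied from the log, everything else a plain os/exec translation, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The inspect Run: lines above in Go form: confirm the container
        // is running before doing anything else with it.
        out, err := exec.Command("docker", "container", "inspect",
            "multinode-026168", "--format", "{{.State.Running}}").Output()
        if err != nil {
            panic(err)
        }
        if strings.TrimSpace(string(out)) != "true" {
            panic("container not running")
        }
        // Then make sure the iptables alternatives file exists inside it.
        err = exec.Command("docker", "exec", "multinode-026168",
            "stat", "/var/lib/dpkg/alternatives/iptables").Run()
        fmt.Println("iptables alternatives present:", err == nil)
    }
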
	I0916 10:53:22.124520  150386 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa...
	I0916 10:53:22.429223  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:53:22.429266  150386 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:53:22.452062  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:22.469125  150386 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:53:22.469147  150386 kic_runner.go:114] Args: [docker exec --privileged multinode-026168 chown docker:docker /home/docker/.ssh/authorized_keys]
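
The kic SSH setup above generates a machine key pair under .minikube/machines/<name>/, copies the public half into the container as /home/docker/.ssh/authorized_keys (381 bytes here), and chowns it to the docker user. A minimal sketch of the key-pair half, assuming a 2048-bit RSA key, which is consistent with the 381-byte authorized_keys line:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate the machine key pair (the log writes it to
        // .minikube/machines/<name>/id_rsa).
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        // This authorized_keys line is what gets copied into the
        // container at /home/docker/.ssh/authorized_keys.
        fmt.Printf("private key: %d bytes\n", len(privPEM))
        fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
    }
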
	I0916 10:53:22.511759  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:22.531129  150386 machine.go:93] provisionDockerMachine start ...
	I0916 10:53:22.531206  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:22.551545  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:53:22.551837  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I0916 10:53:22.551854  150386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:53:22.692713  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168
	
	I0916 10:53:22.692742  150386 ubuntu.go:169] provisioning hostname "multinode-026168"
	I0916 10:53:22.692805  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:22.712078  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:53:22.712291  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I0916 10:53:22.712311  150386 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168 && echo "multinode-026168" | sudo tee /etc/hostname
	I0916 10:53:22.856873  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168
	
	I0916 10:53:22.856942  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:22.873834  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:53:22.874011  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I0916 10:53:22.874030  150386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:53:23.005826  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
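
provisionDockerMachine drives all of the above over the "native" SSH client: it dials the host port Docker published for the container's 22/tcp (32903 in this run), authenticates as `docker` with the machine key, and runs the hostname and /etc/hosts commands shown in the SSH blocks. A sketch with golang.org/x/crypto/ssh, assuming the key path and that host-key checking is skipped for the loopback connection:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Hypothetical key path; the log uses the .minikube/machines dir.
        key, err := os.ReadFile("/tmp/machines/multinode-026168/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        // 32903 is the host port Docker mapped to the container's 22/tcp,
        // recovered by the container-inspect Run: lines above.
        client, err := ssh.Dial("tcp", "127.0.0.1:32903", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Println(string(out), err)
    }
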
	I0916 10:53:23.005858  150386 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:53:23.005903  150386 ubuntu.go:177] setting up certificates
	I0916 10:53:23.005917  150386 provision.go:84] configureAuth start
	I0916 10:53:23.005973  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:53:23.022869  150386 provision.go:143] copyHostCerts
	I0916 10:53:23.022905  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:53:23.022933  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:53:23.022940  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:53:23.023003  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:53:23.023075  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:53:23.023095  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:53:23.023103  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:53:23.023128  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:53:23.023175  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:53:23.023196  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:53:23.023202  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:53:23.023222  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:53:23.023270  150386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-026168]
	I0916 10:53:23.137406  150386 provision.go:177] copyRemoteCerts
	I0916 10:53:23.137473  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:53:23.137511  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.159463  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:23.258647  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:53:23.258716  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:53:23.281767  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:53:23.281827  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 10:53:23.305959  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:53:23.306027  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:53:23.328819  150386 provision.go:87] duration metric: took 322.885907ms to configureAuth
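
configureAuth above copies the host CA material into place and then mints a server certificate whose SANs are exactly the list in the provision.go:117 line: 127.0.0.1, 192.168.67.2, localhost, minikube, multinode-026168. A sketch of that issuance with crypto/x509, assuming a freshly generated stand-in CA rather than the ca.pem/ca-key.pem pair the log loads, and assumed validity periods:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Stand-in CA; the real flow signs with the ca.pem/ca-key.pem
        // pair named in the set-auth-options line above.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server cert carrying the SANs listed by provision.go:117.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-026168"}},
            NotBefore:    time.Now(),
            // Three years, matching CertExpiration:26280h0m0s in the
            // cluster config dumped later in this log.
            NotAfter:    time.Now().AddDate(3, 0, 0),
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
            DNSNames:    []string{"localhost", "minikube", "multinode-026168"},
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        fmt.Printf("server.pem DER: %d bytes\n", len(der))
    }
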
	I0916 10:53:23.328850  150386 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:53:23.329034  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:53:23.329174  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.346526  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:53:23.346889  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I0916 10:53:23.346919  150386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:53:23.566448  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:53:23.566472  150386 machine.go:96] duration metric: took 1.035323474s to provisionDockerMachine
	I0916 10:53:23.566482  150386 client.go:171] duration metric: took 7.089900982s to LocalClient.Create
	I0916 10:53:23.566496  150386 start.go:167] duration metric: took 7.089959092s to libmachine.API.Create "multinode-026168"
	I0916 10:53:23.566503  150386 start.go:293] postStartSetup for "multinode-026168" (driver="docker")
	I0916 10:53:23.566511  150386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:53:23.566575  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:53:23.566612  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.583611  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:23.679163  150386 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:53:23.682571  150386 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:53:23.682594  150386 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:53:23.682600  150386 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:53:23.682606  150386 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:53:23.682613  150386 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:53:23.682617  150386 command_runner.go:130] > ID=ubuntu
	I0916 10:53:23.682620  150386 command_runner.go:130] > ID_LIKE=debian
	I0916 10:53:23.682625  150386 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:53:23.682630  150386 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:53:23.682637  150386 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:53:23.682644  150386 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:53:23.682651  150386 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:53:23.682706  150386 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:53:23.682730  150386 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:53:23.682738  150386 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:53:23.682747  150386 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:53:23.682759  150386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:53:23.682817  150386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:53:23.682898  150386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:53:23.682912  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:53:23.683000  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:53:23.691650  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:53:23.713983  150386 start.go:296] duration metric: took 147.465039ms for postStartSetup
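
The filesync scan above mirrors anything under .minikube/files into the node at the same relative path, which is how 112082.pem ends up in /etc/ssl/certs. A sketch of the scan side, root path copied from the log:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    func main() {
        // Anything under .minikube/files/<path> is mirrored to /<path>
        // on the node, e.g. files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem.
        root := "/home/jenkins/minikube-integration/19651-3799/.minikube/files"
        filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            target := strings.TrimPrefix(p, root)
            fmt.Printf("local asset: %s -> %s\n", p, target)
            return nil
        })
    }
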
	I0916 10:53:23.714319  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:53:23.731359  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:53:23.731624  150386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:53:23.731662  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.748432  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:23.842021  150386 command_runner.go:130] > 30%
	I0916 10:53:23.842224  150386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:53:23.846641  150386 command_runner.go:130] > 205G
	I0916 10:53:23.846906  150386 start.go:128] duration metric: took 7.372555552s to createHost
	I0916 10:53:23.846930  150386 start.go:83] releasing machines lock for "multinode-026168", held for 7.372726341s
	I0916 10:53:23.847004  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:53:23.864775  150386 ssh_runner.go:195] Run: cat /version.json
	I0916 10:53:23.864823  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.864873  150386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:53:23.864929  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.883138  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:23.883396  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:24.044843  150386 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:53:24.047408  150386 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:53:24.047552  150386 ssh_runner.go:195] Run: systemctl --version
	I0916 10:53:24.051947  150386 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:53:24.051990  150386 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:53:24.052058  150386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:53:24.190794  150386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:53:24.194808  150386 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:53:24.194840  150386 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:53:24.194848  150386 command_runner.go:130] > Device: 37h/55d	Inode: 535096      Links: 1
	I0916 10:53:24.194866  150386 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:53:24.194875  150386 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:53:24.194884  150386 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:53:24.194891  150386 command_runner.go:130] > Change: 2024-09-16 10:23:14.009756274 +0000
	I0916 10:53:24.194896  150386 command_runner.go:130] >  Birth: 2024-09-16 10:23:14.009756274 +0000
	I0916 10:53:24.195105  150386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:53:24.213521  150386 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:53:24.213593  150386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:53:24.240626  150386 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0916 10:53:24.240701  150386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
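
The two find/mv pipelines above sideline CRI-O's bundled CNI configs (the loopback conf plus the podman and crio bridge conflists) by renaming them to *.mk_disabled, so they cannot clash with the CNI minikube installs later. The same idea in Go, with glob patterns approximating the find expressions:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Mirror of the find/mv pipelines above: sideline any default
        // loopback/bridge/podman CNI configs in /etc/cni/net.d.
        patterns := []string{"*loopback.conf*", "*bridge*", "*podman*"}
        for _, p := range patterns {
            matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already sidelined
                }
                fmt.Println("disabling", m)
                _ = os.Rename(m, m+".mk_disabled")
            }
        }
    }
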
	I0916 10:53:24.240708  150386 start.go:495] detecting cgroup driver to use...
	I0916 10:53:24.240743  150386 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:53:24.240796  150386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:53:24.254870  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:53:24.265498  150386 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:53:24.265557  150386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:53:24.278044  150386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:53:24.291857  150386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:53:24.369500  150386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:53:24.447658  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 10:53:24.447701  150386 docker.go:233] disabling docker service ...
	I0916 10:53:24.447749  150386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:53:24.465271  150386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:53:24.475865  150386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:53:24.555564  150386 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0916 10:53:24.555651  150386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:53:24.636251  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 10:53:24.636331  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:53:24.647535  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:53:24.663493  150386 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:53:24.663534  150386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:53:24.663571  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.673350  150386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:53:24.673417  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.683157  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.692864  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.702168  150386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:53:24.710521  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.719794  150386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.734475  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.743952  150386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:53:24.751435  150386 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:53:24.751507  150386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:53:24.758780  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:53:24.835644  150386 ssh_runner.go:195] Run: sudo systemctl restart crio
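
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs (matching the cgroup driver detected on the host at start.go:495), move conmon into the pod cgroup, open unprivileged ports via default_sysctls, and then restart crio. A rough Go equivalent of the first three edits, operating on an assumed minimal config string; the regexes are approximations of the sed expressions, not minikube's code:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Assumed minimal starting config; the real file is
        // /etc/crio/crio.conf.d/02-crio.conf on the node.
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "cgroup_manager = \"systemd\"\n" +
            "conmon_cgroup = \"system.slice\"\n"

        // sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        // sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // sed '/conmon_cgroup = .*/d' followed by
        // sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
            ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")

        fmt.Print(conf)
    }
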
	I0916 10:53:24.943612  150386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:53:24.943708  150386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:53:24.947392  150386 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:53:24.947415  150386 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:53:24.947421  150386 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0916 10:53:24.947428  150386 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:53:24.947434  150386 command_runner.go:130] > Access: 2024-09-16 10:53:24.926948060 +0000
	I0916 10:53:24.947439  150386 command_runner.go:130] > Modify: 2024-09-16 10:53:24.926948060 +0000
	I0916 10:53:24.947444  150386 command_runner.go:130] > Change: 2024-09-16 10:53:24.926948060 +0000
	I0916 10:53:24.947448  150386 command_runner.go:130] >  Birth: -
	I0916 10:53:24.947468  150386 start.go:563] Will wait 60s for crictl version
	I0916 10:53:24.947505  150386 ssh_runner.go:195] Run: which crictl
	I0916 10:53:24.950865  150386 command_runner.go:130] > /usr/bin/crictl
	I0916 10:53:24.950944  150386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:53:24.983555  150386 command_runner.go:130] > Version:  0.1.0
	I0916 10:53:24.983579  150386 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:53:24.983585  150386 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:53:24.983590  150386 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:53:24.983635  150386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
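
Both waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") are deadline polls: stat the socket, locate crictl, retry `crictl version` until it answers. A sketch of the version wait; the one-second retry interval is an assumption (the log does not show the real interval):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Poll until crictl answers, mirroring the 60s waits above.
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Print(string(out))
                return
            }
            time.Sleep(time.Second) // assumed interval
        }
        fmt.Println("timed out waiting for crictl version")
    }
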
	I0916 10:53:24.983693  150386 ssh_runner.go:195] Run: crio --version
	I0916 10:53:25.018244  150386 command_runner.go:130] > crio version 1.24.6
	I0916 10:53:25.018270  150386 command_runner.go:130] > Version:          1.24.6
	I0916 10:53:25.018277  150386 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:53:25.018281  150386 command_runner.go:130] > GitTreeState:     clean
	I0916 10:53:25.018287  150386 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:53:25.018291  150386 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:53:25.018300  150386 command_runner.go:130] > Compiler:         gc
	I0916 10:53:25.018304  150386 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:53:25.018309  150386 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:53:25.018317  150386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:53:25.018321  150386 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:53:25.018325  150386 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:53:25.018390  150386 ssh_runner.go:195] Run: crio --version
	I0916 10:53:25.050200  150386 command_runner.go:130] > crio version 1.24.6
	I0916 10:53:25.050224  150386 command_runner.go:130] > Version:          1.24.6
	I0916 10:53:25.050231  150386 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:53:25.050236  150386 command_runner.go:130] > GitTreeState:     clean
	I0916 10:53:25.050242  150386 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:53:25.050246  150386 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:53:25.050251  150386 command_runner.go:130] > Compiler:         gc
	I0916 10:53:25.050255  150386 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:53:25.050260  150386 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:53:25.050268  150386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:53:25.050272  150386 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:53:25.050276  150386 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:53:25.054319  150386 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:53:25.055860  150386 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:53:25.072765  150386 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:53:25.076270  150386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
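
The bash one-liner above is an idempotent /etc/hosts update: filter out any previous host.minikube.internal line, append the current network gateway (192.168.67.1), and copy the temp file back over /etc/hosts. The same filter-then-append in Go, on an in-memory example:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Example current /etc/hosts content with a stale entry.
        hosts := "127.0.0.1\tlocalhost\n192.168.58.1\thost.minikube.internal\n"
        var kept []string
        // grep -v $'\thost.minikube.internal$' ... then append the
        // current gateway, mirroring the one-liner above.
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.67.1\thost.minikube.internal")
        fmt.Println(strings.Join(kept, "\n"))
    }
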
	I0916 10:53:25.086467  150386 kubeadm.go:883] updating cluster {Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:53:25.086594  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:53:25.086643  150386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:53:25.147473  150386 command_runner.go:130] > {
	I0916 10:53:25.147502  150386 command_runner.go:130] >   "images": [
	I0916 10:53:25.147515  150386 command_runner.go:130] >     {
	I0916 10:53:25.147528  150386 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:53:25.147537  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147548  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:53:25.147562  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147568  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147579  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:53:25.147589  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:53:25.147596  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147602  150386 command_runner.go:130] >       "size": "87190579",
	I0916 10:53:25.147608  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.147616  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.147627  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.147634  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.147638  150386 command_runner.go:130] >     },
	I0916 10:53:25.147642  150386 command_runner.go:130] >     {
	I0916 10:53:25.147651  150386 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:53:25.147658  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147664  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:53:25.147670  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147675  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147685  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:53:25.147695  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:53:25.147702  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147711  150386 command_runner.go:130] >       "size": "31470524",
	I0916 10:53:25.147719  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.147723  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.147730  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.147734  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.147739  150386 command_runner.go:130] >     },
	I0916 10:53:25.147742  150386 command_runner.go:130] >     {
	I0916 10:53:25.147753  150386 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:53:25.147761  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147766  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:53:25.147772  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147779  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147789  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:53:25.147799  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:53:25.147807  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147815  150386 command_runner.go:130] >       "size": "63273227",
	I0916 10:53:25.147820  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.147827  150386 command_runner.go:130] >       "username": "nonroot",
	I0916 10:53:25.147832  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.147839  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.147844  150386 command_runner.go:130] >     },
	I0916 10:53:25.147850  150386 command_runner.go:130] >     {
	I0916 10:53:25.147857  150386 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:53:25.147863  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147869  150386 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:53:25.147876  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147881  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147890  150386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:53:25.147903  150386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:53:25.147910  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147915  150386 command_runner.go:130] >       "size": "149009664",
	I0916 10:53:25.147921  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.147925  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.147930  150386 command_runner.go:130] >       },
	I0916 10:53:25.147936  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.147941  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.147947  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.147951  150386 command_runner.go:130] >     },
	I0916 10:53:25.147955  150386 command_runner.go:130] >     {
	I0916 10:53:25.147962  150386 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:53:25.147968  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147974  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:53:25.147980  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147984  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147994  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:53:25.148004  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:53:25.148012  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148019  150386 command_runner.go:130] >       "size": "95237600",
	I0916 10:53:25.148023  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.148029  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.148033  150386 command_runner.go:130] >       },
	I0916 10:53:25.148040  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148045  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148053  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148057  150386 command_runner.go:130] >     },
	I0916 10:53:25.148063  150386 command_runner.go:130] >     {
	I0916 10:53:25.148070  150386 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:53:25.148077  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.148084  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:53:25.148091  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148096  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.148106  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:53:25.148116  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:53:25.148122  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148127  150386 command_runner.go:130] >       "size": "89437508",
	I0916 10:53:25.148134  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.148138  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.148144  150386 command_runner.go:130] >       },
	I0916 10:53:25.148148  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148155  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148159  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148165  150386 command_runner.go:130] >     },
	I0916 10:53:25.148169  150386 command_runner.go:130] >     {
	I0916 10:53:25.148176  150386 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:53:25.148182  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.148188  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:53:25.148194  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148199  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.148208  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:53:25.148217  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:53:25.148224  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148228  150386 command_runner.go:130] >       "size": "92733849",
	I0916 10:53:25.148234  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.148239  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148245  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148250  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148265  150386 command_runner.go:130] >     },
	I0916 10:53:25.148268  150386 command_runner.go:130] >     {
	I0916 10:53:25.148274  150386 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:53:25.148278  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.148283  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:53:25.148287  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148290  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.148304  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:53:25.148312  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:53:25.148315  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148319  150386 command_runner.go:130] >       "size": "68420934",
	I0916 10:53:25.148323  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.148327  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.148330  150386 command_runner.go:130] >       },
	I0916 10:53:25.148334  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148338  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148342  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148349  150386 command_runner.go:130] >     },
	I0916 10:53:25.148353  150386 command_runner.go:130] >     {
	I0916 10:53:25.148362  150386 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:53:25.148369  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.148374  150386 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:53:25.148380  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148385  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.148394  150386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:53:25.148403  150386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:53:25.148409  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148414  150386 command_runner.go:130] >       "size": "742080",
	I0916 10:53:25.148420  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.148425  150386 command_runner.go:130] >         "value": "65535"
	I0916 10:53:25.148431  150386 command_runner.go:130] >       },
	I0916 10:53:25.148436  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148442  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148448  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148454  150386 command_runner.go:130] >     }
	I0916 10:53:25.148458  150386 command_runner.go:130] >   ]
	I0916 10:53:25.148464  150386 command_runner.go:130] > }
	I0916 10:53:25.148642  150386 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:53:25.148655  150386 crio.go:433] Images already preloaded, skipping extraction
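
The decision at crio.go:514 comes from parsing the `crictl images --output json` dump above: if every image required for v1.31.1 already has a repoTag in the list, the tarball extraction path is skipped. A sketch of that check, with a trimmed two-image JSON and a hypothetical `required` list standing in for the real per-version image set:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Just the fields the check needs from `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        // Trimmed two-image version of the dump above.
        raw := []byte(`{"images":[
            {"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]},
            {"repoTags":["registry.k8s.io/etcd:3.5.15-0"]}]}`)
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Hypothetical required set; the real one is the per-version
        // control-plane image list plus helpers like storage-provisioner.
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/etcd:3.5.15-0",
        }
        allPreloaded := true
        for _, r := range required {
            if !have[r] {
                allPreloaded = false
            }
        }
        fmt.Println("all images are preloaded:", allPreloaded)
    }
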
	I0916 10:53:25.148705  150386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:53:25.181514  150386 command_runner.go:130] > {
	I0916 10:53:25.181541  150386 command_runner.go:130] >   "images": [
	I0916 10:53:25.181546  150386 command_runner.go:130] >     {
	I0916 10:53:25.181558  150386 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:53:25.181564  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.181572  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:53:25.181577  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181582  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.181596  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:53:25.181607  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:53:25.181612  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181619  150386 command_runner.go:130] >       "size": "87190579",
	I0916 10:53:25.181626  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.181633  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.181649  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.181660  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.181668  150386 command_runner.go:130] >     },
	I0916 10:53:25.181676  150386 command_runner.go:130] >     {
	I0916 10:53:25.181687  150386 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:53:25.181696  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.181706  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:53:25.181714  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181722  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.181737  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:53:25.181753  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:53:25.181761  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181773  150386 command_runner.go:130] >       "size": "31470524",
	I0916 10:53:25.181783  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.181792  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.181799  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.181809  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.181819  150386 command_runner.go:130] >     },
	I0916 10:53:25.181827  150386 command_runner.go:130] >     {
	I0916 10:53:25.181844  150386 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:53:25.181853  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.181863  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:53:25.181872  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181879  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.181894  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:53:25.181909  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:53:25.181917  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181924  150386 command_runner.go:130] >       "size": "63273227",
	I0916 10:53:25.181933  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.181941  150386 command_runner.go:130] >       "username": "nonroot",
	I0916 10:53:25.181951  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.181961  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.181967  150386 command_runner.go:130] >     },
	I0916 10:53:25.181974  150386 command_runner.go:130] >     {
	I0916 10:53:25.181984  150386 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:53:25.181993  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182001  150386 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:53:25.182007  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182015  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182027  150386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:53:25.182046  150386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:53:25.182055  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182061  150386 command_runner.go:130] >       "size": "149009664",
	I0916 10:53:25.182070  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182078  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.182086  150386 command_runner.go:130] >       },
	I0916 10:53:25.182095  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182104  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182113  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182121  150386 command_runner.go:130] >     },
	I0916 10:53:25.182132  150386 command_runner.go:130] >     {
	I0916 10:53:25.182145  150386 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:53:25.182152  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182164  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:53:25.182173  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182183  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182198  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:53:25.182215  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:53:25.182223  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182230  150386 command_runner.go:130] >       "size": "95237600",
	I0916 10:53:25.182237  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182246  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.182255  150386 command_runner.go:130] >       },
	I0916 10:53:25.182262  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182271  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182279  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182287  150386 command_runner.go:130] >     },
	I0916 10:53:25.182294  150386 command_runner.go:130] >     {
	I0916 10:53:25.182308  150386 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:53:25.182317  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182327  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:53:25.182336  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182343  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182359  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:53:25.182375  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:53:25.182383  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182389  150386 command_runner.go:130] >       "size": "89437508",
	I0916 10:53:25.182398  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182406  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.182414  150386 command_runner.go:130] >       },
	I0916 10:53:25.182421  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182430  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182437  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182445  150386 command_runner.go:130] >     },
	I0916 10:53:25.182451  150386 command_runner.go:130] >     {
	I0916 10:53:25.182463  150386 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:53:25.182472  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182480  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:53:25.182489  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182497  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182512  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:53:25.182526  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:53:25.182534  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182541  150386 command_runner.go:130] >       "size": "92733849",
	I0916 10:53:25.182551  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.182560  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182571  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182580  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182586  150386 command_runner.go:130] >     },
	I0916 10:53:25.182594  150386 command_runner.go:130] >     {
	I0916 10:53:25.182603  150386 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:53:25.182609  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182617  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:53:25.182626  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182633  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182656  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:53:25.182670  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:53:25.182676  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182683  150386 command_runner.go:130] >       "size": "68420934",
	I0916 10:53:25.182690  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182700  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.182708  150386 command_runner.go:130] >       },
	I0916 10:53:25.182715  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182723  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182733  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182740  150386 command_runner.go:130] >     },
	I0916 10:53:25.182750  150386 command_runner.go:130] >     {
	I0916 10:53:25.182764  150386 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:53:25.182774  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182784  150386 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:53:25.182792  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182800  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182813  150386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:53:25.182828  150386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:53:25.182836  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182855  150386 command_runner.go:130] >       "size": "742080",
	I0916 10:53:25.182866  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182875  150386 command_runner.go:130] >         "value": "65535"
	I0916 10:53:25.182882  150386 command_runner.go:130] >       },
	I0916 10:53:25.182889  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182900  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182910  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182917  150386 command_runner.go:130] >     }
	I0916 10:53:25.182925  150386 command_runner.go:130] >   ]
	I0916 10:53:25.182933  150386 command_runner.go:130] > }
	I0916 10:53:25.183047  150386 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:53:25.183060  150386 cache_images.go:84] Images are preloaded, skipping loading
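	The JSON dump above is CRI-O's image inventory, which minikube compares against the preload manifest before deciding the load can be skipped. A minimal sketch of reproducing it by hand, assuming crictl is installed on the node and CRI-O listens on its default socket:
	# list images known to CRI-O, in a JSON shape similar to the dump above
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images -o json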
	I0916 10:53:25.183070  150386 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.31.1 crio true true} ...
	I0916 10:53:25.183176  150386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
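	The unit fragment above is what minikube renders into the kubelet systemd drop-in. A hedged way to inspect the result on the node (the drop-in path appears later in this log, and the profile name is taken from it):
	$ minikube ssh -p multinode-026168 -- systemctl cat kubelet
	$ minikube ssh -p multinode-026168 -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf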
	I0916 10:53:25.183254  150386 ssh_runner.go:195] Run: crio config
	I0916 10:53:25.220901  150386 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 10:53:25.220935  150386 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 10:53:25.220945  150386 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 10:53:25.220950  150386 command_runner.go:130] > #
	I0916 10:53:25.220958  150386 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 10:53:25.220966  150386 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 10:53:25.220975  150386 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 10:53:25.220986  150386 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 10:53:25.221000  150386 command_runner.go:130] > # reload'.
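	As the comment notes, CRI-O re-reads the options marked for live reload when it receives SIGHUP. A sketch, assuming CRI-O runs as the systemd unit named crio (which typically delivers SIGHUP via its ExecReload):
	$ sudo systemctl reload crio
	$ sudo pkill -HUP -x crio      # equivalent, signaling the daemon directly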
	I0916 10:53:25.221014  150386 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 10:53:25.221029  150386 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 10:53:25.221043  150386 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 10:53:25.221058  150386 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 10:53:25.221068  150386 command_runner.go:130] > [crio]
	I0916 10:53:25.221081  150386 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 10:53:25.221093  150386 command_runner.go:130] > # container images, in this directory.
	I0916 10:53:25.221125  150386 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0916 10:53:25.221141  150386 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 10:53:25.221153  150386 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0916 10:53:25.221168  150386 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 10:53:25.221182  150386 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 10:53:25.221194  150386 command_runner.go:130] > # storage_driver = "vfs"
	I0916 10:53:25.221203  150386 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 10:53:25.221213  150386 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 10:53:25.221223  150386 command_runner.go:130] > # storage_option = [
	I0916 10:53:25.221230  150386 command_runner.go:130] > # ]
	I0916 10:53:25.221244  150386 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 10:53:25.221258  150386 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 10:53:25.221270  150386 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 10:53:25.221284  150386 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 10:53:25.221298  150386 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 10:53:25.221310  150386 command_runner.go:130] > # always happen on a node reboot
	I0916 10:53:25.221322  150386 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 10:53:25.221357  150386 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 10:53:25.221379  150386 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 10:53:25.221392  150386 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 10:53:25.221399  150386 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0916 10:53:25.221413  150386 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 10:53:25.221428  150386 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 10:53:25.221438  150386 command_runner.go:130] > # internal_wipe = true
	I0916 10:53:25.221448  150386 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 10:53:25.221461  150386 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 10:53:25.221477  150386 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 10:53:25.221494  150386 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 10:53:25.221505  150386 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 10:53:25.221511  150386 command_runner.go:130] > [crio.api]
	I0916 10:53:25.221520  150386 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 10:53:25.221532  150386 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 10:53:25.221545  150386 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 10:53:25.221554  150386 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 10:53:25.221569  150386 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 10:53:25.221586  150386 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 10:53:25.221598  150386 command_runner.go:130] > # stream_port = "0"
	I0916 10:53:25.221613  150386 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 10:53:25.221624  150386 command_runner.go:130] > # stream_enable_tls = false
	I0916 10:53:25.221634  150386 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 10:53:25.221646  150386 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 10:53:25.221656  150386 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 10:53:25.221671  150386 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 10:53:25.221677  150386 command_runner.go:130] > # minutes.
	I0916 10:53:25.221685  150386 command_runner.go:130] > # stream_tls_cert = ""
	I0916 10:53:25.221699  150386 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 10:53:25.221712  150386 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 10:53:25.221721  150386 command_runner.go:130] > # stream_tls_key = ""
	I0916 10:53:25.221730  150386 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 10:53:25.221741  150386 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 10:53:25.221751  150386 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 10:53:25.221761  150386 command_runner.go:130] > # stream_tls_ca = ""
	I0916 10:53:25.221774  150386 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:53:25.221786  150386 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0916 10:53:25.221801  150386 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:53:25.221810  150386 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0916 10:53:25.221838  150386 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 10:53:25.221853  150386 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 10:53:25.221859  150386 command_runner.go:130] > [crio.runtime]
	I0916 10:53:25.221872  150386 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 10:53:25.221884  150386 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 10:53:25.221891  150386 command_runner.go:130] > # "nofile=1024:2048"
	I0916 10:53:25.221902  150386 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 10:53:25.221912  150386 command_runner.go:130] > # default_ulimits = [
	I0916 10:53:25.221918  150386 command_runner.go:130] > # ]
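	A hypothetical drop-in following the ulimit format documented above, assuming this CRI-O build reads /etc/crio/crio.conf.d (the file name is illustrative):
	$ sudo tee /etc/crio/crio.conf.d/10-ulimits.conf <<'EOF'
	[crio.runtime]
	# raise the default open-files limit for all containers
	default_ulimits = [
		"nofile=1024:2048",
	]
	EOF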
	I0916 10:53:25.221932  150386 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 10:53:25.221940  150386 command_runner.go:130] > # no_pivot = false
	I0916 10:53:25.221952  150386 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 10:53:25.221964  150386 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 10:53:25.221976  150386 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 10:53:25.221986  150386 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 10:53:25.221994  150386 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 10:53:25.222008  150386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:53:25.222018  150386 command_runner.go:130] > # conmon = ""
	I0916 10:53:25.222025  150386 command_runner.go:130] > # Cgroup setting for conmon
	I0916 10:53:25.222044  150386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 10:53:25.222055  150386 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 10:53:25.222066  150386 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 10:53:25.222075  150386 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 10:53:25.222086  150386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:53:25.222098  150386 command_runner.go:130] > # conmon_env = [
	I0916 10:53:25.222104  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222116  150386 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 10:53:25.222123  150386 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 10:53:25.222132  150386 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 10:53:25.222138  150386 command_runner.go:130] > # default_env = [
	I0916 10:53:25.222143  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222158  150386 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 10:53:25.222165  150386 command_runner.go:130] > # selinux = false
	I0916 10:53:25.222177  150386 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 10:53:25.222190  150386 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 10:53:25.222201  150386 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 10:53:25.222213  150386 command_runner.go:130] > # seccomp_profile = ""
	I0916 10:53:25.222226  150386 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 10:53:25.222238  150386 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 10:53:25.222250  150386 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 10:53:25.222261  150386 command_runner.go:130] > # which might increase security.
	I0916 10:53:25.222272  150386 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0916 10:53:25.222285  150386 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 10:53:25.222297  150386 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 10:53:25.222310  150386 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 10:53:25.222323  150386 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 10:53:25.222334  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.222346  150386 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 10:53:25.222358  150386 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 10:53:25.222368  150386 command_runner.go:130] > # the cgroup blockio controller.
	I0916 10:53:25.222378  150386 command_runner.go:130] > # blockio_config_file = ""
	I0916 10:53:25.222388  150386 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 10:53:25.222398  150386 command_runner.go:130] > # irqbalance daemon.
	I0916 10:53:25.222409  150386 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 10:53:25.222422  150386 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 10:53:25.222433  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.222442  150386 command_runner.go:130] > # rdt_config_file = ""
	I0916 10:53:25.222458  150386 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 10:53:25.222467  150386 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 10:53:25.222477  150386 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 10:53:25.222487  150386 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 10:53:25.222499  150386 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 10:53:25.222513  150386 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 10:53:25.222521  150386 command_runner.go:130] > # will be added.
	I0916 10:53:25.222532  150386 command_runner.go:130] > # default_capabilities = [
	I0916 10:53:25.222542  150386 command_runner.go:130] > # 	"CHOWN",
	I0916 10:53:25.222551  150386 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 10:53:25.222558  150386 command_runner.go:130] > # 	"FSETID",
	I0916 10:53:25.222567  150386 command_runner.go:130] > # 	"FOWNER",
	I0916 10:53:25.222606  150386 command_runner.go:130] > # 	"SETGID",
	I0916 10:53:25.222615  150386 command_runner.go:130] > # 	"SETUID",
	I0916 10:53:25.222624  150386 command_runner.go:130] > # 	"SETPCAP",
	I0916 10:53:25.222632  150386 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 10:53:25.222640  150386 command_runner.go:130] > # 	"KILL",
	I0916 10:53:25.222649  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222661  150386 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 10:53:25.222675  150386 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 10:53:25.222686  150386 command_runner.go:130] > # add_inheritable_capabilities = true
	I0916 10:53:25.222698  150386 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 10:53:25.222711  150386 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:53:25.222719  150386 command_runner.go:130] > default_sysctls = [
	I0916 10:53:25.222729  150386 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 10:53:25.222737  150386 command_runner.go:130] > ]
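	To confirm the default sysctl above actually lands inside containers, a throwaway pod can read it back; a sketch assuming kubectl access to this cluster (the pod name is hypothetical):
	$ kubectl run sysctl-check --rm -it --restart=Never --image=busybox \
	    -- sysctl net.ipv4.ip_unprivileged_port_start
	# expected output: net.ipv4.ip_unprivileged_port_start = 0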
	I0916 10:53:25.222745  150386 command_runner.go:130] > # List of devices on the host that a
	I0916 10:53:25.222756  150386 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 10:53:25.222764  150386 command_runner.go:130] > # allowed_devices = [
	I0916 10:53:25.222772  150386 command_runner.go:130] > # 	"/dev/fuse",
	I0916 10:53:25.222778  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222785  150386 command_runner.go:130] > # List of additional devices, specified as
	I0916 10:53:25.222819  150386 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 10:53:25.222829  150386 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 10:53:25.222840  150386 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:53:25.222849  150386 command_runner.go:130] > # additional_devices = [
	I0916 10:53:25.222857  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222864  150386 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 10:53:25.222872  150386 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 10:53:25.222880  150386 command_runner.go:130] > # 	"/etc/cdi",
	I0916 10:53:25.222889  150386 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 10:53:25.222897  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222906  150386 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 10:53:25.222918  150386 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 10:53:25.222931  150386 command_runner.go:130] > # Defaults to false.
	I0916 10:53:25.222942  150386 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 10:53:25.222951  150386 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 10:53:25.222963  150386 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 10:53:25.222971  150386 command_runner.go:130] > # hooks_dir = [
	I0916 10:53:25.222981  150386 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 10:53:25.222988  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222997  150386 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 10:53:25.223009  150386 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 10:53:25.223019  150386 command_runner.go:130] > # its default mounts from the following two files:
	I0916 10:53:25.223026  150386 command_runner.go:130] > #
	I0916 10:53:25.223035  150386 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 10:53:25.223047  150386 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 10:53:25.223058  150386 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 10:53:25.223066  150386 command_runner.go:130] > #
	I0916 10:53:25.223078  150386 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 10:53:25.223090  150386 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 10:53:25.223103  150386 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 10:53:25.223113  150386 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 10:53:25.223118  150386 command_runner.go:130] > #
	I0916 10:53:25.223127  150386 command_runner.go:130] > # default_mounts_file = ""
	I0916 10:53:25.223135  150386 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 10:53:25.223149  150386 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 10:53:25.223159  150386 command_runner.go:130] > # pids_limit = 0
	I0916 10:53:25.223172  150386 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 10:53:25.223184  150386 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 10:53:25.223196  150386 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 10:53:25.223211  150386 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 10:53:25.223221  150386 command_runner.go:130] > # log_size_max = -1
	I0916 10:53:25.223236  150386 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 10:53:25.223245  150386 command_runner.go:130] > # log_to_journald = false
	I0916 10:53:25.223258  150386 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 10:53:25.223268  150386 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 10:53:25.223280  150386 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 10:53:25.223291  150386 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 10:53:25.223303  150386 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 10:53:25.223312  150386 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 10:53:25.223322  150386 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 10:53:25.223334  150386 command_runner.go:130] > # read_only = false
	I0916 10:53:25.223346  150386 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 10:53:25.223359  150386 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 10:53:25.223368  150386 command_runner.go:130] > # live configuration reload.
	I0916 10:53:25.223374  150386 command_runner.go:130] > # log_level = "info"
	I0916 10:53:25.223384  150386 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 10:53:25.223395  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.223404  150386 command_runner.go:130] > # log_filter = ""
	I0916 10:53:25.223415  150386 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 10:53:25.223428  150386 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 10:53:25.223437  150386 command_runner.go:130] > # separated by comma.
	I0916 10:53:25.223447  150386 command_runner.go:130] > # uid_mappings = ""
	I0916 10:53:25.223458  150386 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 10:53:25.223470  150386 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 10:53:25.223479  150386 command_runner.go:130] > # separated by comma.
	I0916 10:53:25.223488  150386 command_runner.go:130] > # gid_mappings = ""
	I0916 10:53:25.223501  150386 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 10:53:25.223513  150386 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:53:25.223524  150386 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:53:25.223534  150386 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 10:53:25.223545  150386 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 10:53:25.223556  150386 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:53:25.223568  150386 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:53:25.223583  150386 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 10:53:25.223594  150386 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 10:53:25.223603  150386 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 10:53:25.223616  150386 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 10:53:25.223626  150386 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 10:53:25.223641  150386 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 10:53:25.223658  150386 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 10:53:25.223668  150386 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 10:53:25.223681  150386 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 10:53:25.223691  150386 command_runner.go:130] > # drop_infra_ctr = true
	I0916 10:53:25.223704  150386 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 10:53:25.223715  150386 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 10:53:25.223728  150386 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 10:53:25.223738  150386 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 10:53:25.223749  150386 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 10:53:25.223763  150386 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 10:53:25.223772  150386 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 10:53:25.223783  150386 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 10:53:25.223792  150386 command_runner.go:130] > # pinns_path = ""
	I0916 10:53:25.223803  150386 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 10:53:25.223815  150386 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0916 10:53:25.223827  150386 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0916 10:53:25.223834  150386 command_runner.go:130] > # default_runtime = "runc"
	I0916 10:53:25.223839  150386 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 10:53:25.223849  150386 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of the path being created as a directory).
	I0916 10:53:25.223861  150386 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 10:53:25.223868  150386 command_runner.go:130] > # creation as a file is not desired either.
	I0916 10:53:25.223875  150386 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 10:53:25.223882  150386 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 10:53:25.223887  150386 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 10:53:25.223893  150386 command_runner.go:130] > # ]
	I0916 10:53:25.223899  150386 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 10:53:25.223907  150386 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 10:53:25.223916  150386 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0916 10:53:25.223923  150386 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0916 10:53:25.223928  150386 command_runner.go:130] > #
	I0916 10:53:25.223933  150386 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0916 10:53:25.223940  150386 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0916 10:53:25.223944  150386 command_runner.go:130] > #  runtime_type = "oci"
	I0916 10:53:25.223949  150386 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0916 10:53:25.223957  150386 command_runner.go:130] > #  privileged_without_host_devices = false
	I0916 10:53:25.223961  150386 command_runner.go:130] > #  allowed_annotations = []
	I0916 10:53:25.223965  150386 command_runner.go:130] > # Where:
	I0916 10:53:25.223970  150386 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0916 10:53:25.223978  150386 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0916 10:53:25.223987  150386 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 10:53:25.223993  150386 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 10:53:25.223999  150386 command_runner.go:130] > #   in $PATH.
	I0916 10:53:25.224006  150386 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0916 10:53:25.224013  150386 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 10:53:25.224019  150386 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0916 10:53:25.224027  150386 command_runner.go:130] > #   state.
	I0916 10:53:25.224034  150386 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 10:53:25.224042  150386 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 10:53:25.224048  150386 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 10:53:25.224056  150386 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 10:53:25.224062  150386 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 10:53:25.224070  150386 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 10:53:25.224074  150386 command_runner.go:130] > #   The currently recognized values are:
	I0916 10:53:25.224083  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 10:53:25.224092  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 10:53:25.224102  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 10:53:25.224110  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 10:53:25.224119  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 10:53:25.224128  150386 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 10:53:25.224134  150386 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 10:53:25.224142  150386 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0916 10:53:25.224147  150386 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 10:53:25.224154  150386 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 10:53:25.224158  150386 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0916 10:53:25.224164  150386 command_runner.go:130] > runtime_type = "oci"
	I0916 10:53:25.224169  150386 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 10:53:25.224175  150386 command_runner.go:130] > runtime_config_path = ""
	I0916 10:53:25.224179  150386 command_runner.go:130] > monitor_path = ""
	I0916 10:53:25.224185  150386 command_runner.go:130] > monitor_cgroup = ""
	I0916 10:53:25.224190  150386 command_runner.go:130] > monitor_exec_cgroup = ""
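	A quick sanity check that the runtime binary configured above exists and runs, using the runtime_path from the dump:
	$ /usr/lib/cri-o-runc/sbin/runc --version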
	I0916 10:53:25.224220  150386 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0916 10:53:25.224226  150386 command_runner.go:130] > # running containers
	I0916 10:53:25.224230  150386 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0916 10:53:25.224235  150386 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0916 10:53:25.224244  150386 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0916 10:53:25.224250  150386 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0916 10:53:25.224258  150386 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0916 10:53:25.224263  150386 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0916 10:53:25.224268  150386 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0916 10:53:25.224272  150386 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0916 10:53:25.224279  150386 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0916 10:53:25.224283  150386 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0916 10:53:25.224293  150386 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 10:53:25.224300  150386 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 10:53:25.224307  150386 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 10:53:25.224316  150386 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0916 10:53:25.224323  150386 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 10:53:25.224331  150386 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 10:53:25.224340  150386 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 10:53:25.224350  150386 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 10:53:25.224355  150386 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 10:53:25.224364  150386 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 10:53:25.224367  150386 command_runner.go:130] > # Example:
	I0916 10:53:25.224372  150386 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 10:53:25.224379  150386 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 10:53:25.224384  150386 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 10:53:25.224393  150386 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 10:53:25.224396  150386 command_runner.go:130] > # cpuset = 0
	I0916 10:53:25.224400  150386 command_runner.go:130] > # cpushares = "0-1"
	I0916 10:53:25.224405  150386 command_runner.go:130] > # Where:
	I0916 10:53:25.224411  150386 command_runner.go:130] > # The workload name is workload-type.
	I0916 10:53:25.224419  150386 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 10:53:25.224426  150386 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 10:53:25.224432  150386 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 10:53:25.224442  150386 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 10:53:25.224447  150386 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 10:53:25.224451  150386 command_runner.go:130] > # 
	I0916 10:53:25.224457  150386 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 10:53:25.224463  150386 command_runner.go:130] > #
	I0916 10:53:25.224469  150386 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 10:53:25.224477  150386 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 10:53:25.224483  150386 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 10:53:25.224491  150386 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 10:53:25.224497  150386 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 10:53:25.224504  150386 command_runner.go:130] > [crio.image]
	I0916 10:53:25.224510  150386 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 10:53:25.224518  150386 command_runner.go:130] > # default_transport = "docker://"
	I0916 10:53:25.224524  150386 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 10:53:25.224532  150386 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:53:25.224536  150386 command_runner.go:130] > # global_auth_file = ""
	I0916 10:53:25.224543  150386 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 10:53:25.224548  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.224554  150386 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 10:53:25.224560  150386 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 10:53:25.224568  150386 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:53:25.224577  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.224584  150386 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 10:53:25.224589  150386 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 10:53:25.224597  150386 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0916 10:53:25.224603  150386 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0916 10:53:25.224611  150386 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 10:53:25.224615  150386 command_runner.go:130] > # pause_command = "/pause"
	I0916 10:53:25.224623  150386 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 10:53:25.224631  150386 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 10:53:25.224640  150386 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 10:53:25.224648  150386 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 10:53:25.224653  150386 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 10:53:25.224658  150386 command_runner.go:130] > # signature_policy = ""
	I0916 10:53:25.224667  150386 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 10:53:25.224675  150386 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 10:53:25.224679  150386 command_runner.go:130] > # changing them here.
	I0916 10:53:25.224685  150386 command_runner.go:130] > # insecure_registries = [
	I0916 10:53:25.224689  150386 command_runner.go:130] > # ]
	I0916 10:53:25.224695  150386 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 10:53:25.224702  150386 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 10:53:25.224707  150386 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 10:53:25.224714  150386 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 10:53:25.224718  150386 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 10:53:25.224723  150386 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 10:53:25.224727  150386 command_runner.go:130] > # CNI plugins.
	I0916 10:53:25.224731  150386 command_runner.go:130] > [crio.network]
	I0916 10:53:25.224739  150386 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 10:53:25.224744  150386 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 10:53:25.224748  150386 command_runner.go:130] > # cni_default_network = ""
	I0916 10:53:25.224756  150386 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 10:53:25.224762  150386 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 10:53:25.224767  150386 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 10:53:25.224773  150386 command_runner.go:130] > # plugin_dirs = [
	I0916 10:53:25.224777  150386 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 10:53:25.224782  150386 command_runner.go:130] > # ]
	I0916 10:53:25.224788  150386 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 10:53:25.224791  150386 command_runner.go:130] > [crio.metrics]
	I0916 10:53:25.224796  150386 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 10:53:25.224801  150386 command_runner.go:130] > # enable_metrics = false
	I0916 10:53:25.224805  150386 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 10:53:25.224810  150386 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 10:53:25.224819  150386 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 10:53:25.224826  150386 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 10:53:25.224834  150386 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 10:53:25.224838  150386 command_runner.go:130] > # metrics_collectors = [
	I0916 10:53:25.224843  150386 command_runner.go:130] > # 	"operations",
	I0916 10:53:25.224847  150386 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 10:53:25.224852  150386 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 10:53:25.224858  150386 command_runner.go:130] > # 	"operations_errors",
	I0916 10:53:25.224862  150386 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 10:53:25.224867  150386 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 10:53:25.224871  150386 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 10:53:25.224875  150386 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 10:53:25.224879  150386 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 10:53:25.224883  150386 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 10:53:25.224887  150386 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 10:53:25.224891  150386 command_runner.go:130] > # 	"containers_oom_total",
	I0916 10:53:25.224895  150386 command_runner.go:130] > # 	"containers_oom",
	I0916 10:53:25.224901  150386 command_runner.go:130] > # 	"processes_defunct",
	I0916 10:53:25.224905  150386 command_runner.go:130] > # 	"operations_total",
	I0916 10:53:25.224911  150386 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 10:53:25.224915  150386 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 10:53:25.224920  150386 command_runner.go:130] > # 	"operations_errors_total",
	I0916 10:53:25.224924  150386 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 10:53:25.224928  150386 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 10:53:25.224934  150386 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 10:53:25.224939  150386 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 10:53:25.224945  150386 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 10:53:25.224949  150386 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 10:53:25.224952  150386 command_runner.go:130] > # ]
	I0916 10:53:25.224957  150386 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 10:53:25.224961  150386 command_runner.go:130] > # metrics_port = 9090
	I0916 10:53:25.224966  150386 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 10:53:25.224970  150386 command_runner.go:130] > # metrics_socket = ""
	I0916 10:53:25.224977  150386 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 10:53:25.224985  150386 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 10:53:25.224993  150386 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 10:53:25.224997  150386 command_runner.go:130] > # certificate on any modification event.
	I0916 10:53:25.225002  150386 command_runner.go:130] > # metrics_cert = ""
	I0916 10:53:25.225007  150386 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 10:53:25.225015  150386 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 10:53:25.225018  150386 command_runner.go:130] > # metrics_key = ""
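	Metrics are off here (enable_metrics defaults to false). A sketch of switching them on and scraping the endpoint on the port shown above, again assuming a crio.conf.d drop-in directory:
	$ sudo tee /etc/crio/crio.conf.d/20-metrics.conf <<'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	$ sudo systemctl restart crio
	$ curl -s http://127.0.0.1:9090/metrics | head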
	I0916 10:53:25.225029  150386 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 10:53:25.225035  150386 command_runner.go:130] > [crio.tracing]
	I0916 10:53:25.225039  150386 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 10:53:25.225044  150386 command_runner.go:130] > # enable_tracing = false
	I0916 10:53:25.225048  150386 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 10:53:25.225052  150386 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 10:53:25.225056  150386 command_runner.go:130] > # Number of samples to collect per million spans.
	I0916 10:53:25.225060  150386 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 10:53:25.225066  150386 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 10:53:25.225069  150386 command_runner.go:130] > [crio.stats]
	I0916 10:53:25.225075  150386 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 10:53:25.225080  150386 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 10:53:25.225084  150386 command_runner.go:130] > # stats_collection_period = 0
	I0916 10:53:25.225116  150386 command_runner.go:130] ! time="2024-09-16 10:53:25.218754149Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0916 10:53:25.225127  150386 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
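	Since most of the dump above is commented-out defaults, the settings this instance actually overrides can be isolated by filtering out comments and blank lines:
	$ sudo crio config 2>/dev/null | grep -Ev '^[[:space:]]*(#|$)'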
	I0916 10:53:25.225207  150386 cni.go:84] Creating CNI manager for ""
	I0916 10:53:25.225212  150386 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:53:25.225220  150386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:53:25.225240  150386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-026168 NodeName:multinode-026168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:53:25.225402  150386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-026168"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
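
Note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are exactly what kubeadm consumes later in this run. A quick sanity check once the file lands at /var/tmp/minikube/kubeadm.yaml, assuming the bundled kubeadm supports the "config validate" subcommand (recent releases do; this step is not part of the test run):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml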
	
	I0916 10:53:25.225480  150386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:53:25.233837  150386 command_runner.go:130] > kubeadm
	I0916 10:53:25.233861  150386 command_runner.go:130] > kubectl
	I0916 10:53:25.233867  150386 command_runner.go:130] > kubelet
	I0916 10:53:25.233893  150386 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:53:25.233945  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:53:25.241883  150386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0916 10:53:25.258665  150386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:53:25.275460  150386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0916 10:53:25.291931  150386 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:53:25.295165  150386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
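
Note: the /etc/hosts rewrite above is idempotent: strip any existing line ending in the hostname, append a fresh IP/name pair, and copy the temp file into place under sudo. The same pattern as a reusable shell function (a hypothetical helper, not from the log):

	ensure_host() {
	  local ip="$1" name="$2"
	  # keep every line except the old entry, then append the fresh one
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	}
	ensure_host 192.168.67.2 control-plane.minikube.internal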
	I0916 10:53:25.305057  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:53:25.378997  150386 ssh_runner.go:195] Run: sudo systemctl start kubelet
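
Note: the two files scp'd above are the kubelet systemd unit and its kubeadm drop-in; the daemon-reload makes systemd re-read both before the kubelet starts. A quick manual check that the drop-in was picked up (not part of the run):

	systemctl cat kubelet        # prints kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet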
	I0916 10:53:25.391812  150386 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.2
	I0916 10:53:25.391836  150386 certs.go:194] generating shared ca certs ...
	I0916 10:53:25.391854  150386 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.392006  150386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:53:25.392059  150386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:53:25.392083  150386 certs.go:256] generating profile certs ...
	I0916 10:53:25.392154  150386 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key
	I0916 10:53:25.392179  150386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt with IP's: []
	I0916 10:53:25.481640  150386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt ...
	I0916 10:53:25.481678  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt: {Name:mk9bd3c2540afe41a9b495b48558c06f33cad4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.481875  150386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key ...
	I0916 10:53:25.481890  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key: {Name:mkc369c04f3bf5390d2f7aaeb26ec87bc68b4e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.482002  150386 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66
	I0916 10:53:25.482030  150386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt.d8814b66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0916 10:53:25.775934  150386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt.d8814b66 ...
	I0916 10:53:25.775971  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt.d8814b66: {Name:mk3be0689653695bd78826696ae2b5515df82105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.776191  150386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66 ...
	I0916 10:53:25.776209  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66: {Name:mk742343203e36bcee65f9aa431aa427c1eb2e9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.776305  150386 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt.d8814b66 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt
	I0916 10:53:25.776417  150386 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key
	I0916 10:53:25.776503  150386 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key
	I0916 10:53:25.776525  150386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt with IP's: []
	I0916 10:53:25.956310  150386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt ...
	I0916 10:53:25.956349  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt: {Name:mkda10595286654079142e1eff4429efbace9338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.956551  150386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key ...
	I0916 10:53:25.956576  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key: {Name:mkc963296c8321762a9d334c4bc71418f9425823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.956695  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:53:25.956719  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:53:25.956734  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:53:25.956750  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:53:25.956769  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:53:25.956789  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:53:25.956808  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:53:25.956826  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:53:25.956893  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:53:25.956939  150386 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:53:25.956952  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:53:25.956984  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:53:25.957018  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:53:25.957050  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:53:25.957106  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:53:25.957152  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:53:25.957174  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:25.957192  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:53:25.957794  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:53:25.981746  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:53:26.004628  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:53:26.027194  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:53:26.049678  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:53:26.072111  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:53:26.093871  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:53:26.116795  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:53:26.138967  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:53:26.161181  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:53:26.183991  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:53:26.207456  150386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:53:26.224158  150386 ssh_runner.go:195] Run: openssl version
	I0916 10:53:26.229088  150386 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:53:26.229252  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:53:26.237954  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:53:26.241388  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:53:26.241420  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:53:26.241469  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:53:26.248290  150386 command_runner.go:130] > 3ec20f2e
	I0916 10:53:26.248448  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:53:26.257725  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:53:26.266765  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:26.270336  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:26.270384  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:26.270438  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:26.277654  150386 command_runner.go:130] > b5213941
	I0916 10:53:26.277728  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:53:26.287565  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:53:26.297016  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:53:26.300770  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:53:26.300829  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:53:26.300872  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:53:26.307375  150386 command_runner.go:130] > 51391683
	I0916 10:53:26.307459  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
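
Note: each of the three certificate blocks above follows the OpenSSL trust-store convention that c_rehash automates: link the PEM into /etc/ssl/certs, compute its subject hash, and create a HASH.0 symlink so OpenSSL can look it up by hash. The pattern for a single certificate (hypothetical extra.pem; requires root):

	sudo ln -fs /usr/share/ca-certificates/extra.pem /etc/ssl/certs/extra.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/extra.pem)
	sudo ln -fs /etc/ssl/certs/extra.pem "/etc/ssl/certs/${hash}.0"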
	I0916 10:53:26.316661  150386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:53:26.320055  150386 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:53:26.320103  150386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:53:26.320144  150386 kubeadm.go:392] StartCluster: {Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:53:26.320226  150386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:53:26.320275  150386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:53:26.355171  150386 cri.go:89] found id: ""
	I0916 10:53:26.355249  150386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:53:26.363356  150386 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0916 10:53:26.363387  150386 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0916 10:53:26.363396  150386 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0916 10:53:26.364086  150386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:53:26.372625  150386 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:53:26.372684  150386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:53:26.381125  150386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0916 10:53:26.381156  150386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0916 10:53:26.381169  150386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0916 10:53:26.381181  150386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:53:26.381221  150386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:53:26.381236  150386 kubeadm.go:157] found existing configuration files:
	
	I0916 10:53:26.381286  150386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:53:26.389970  150386 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:53:26.390026  150386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:53:26.390078  150386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:53:26.398312  150386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:53:26.406493  150386 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:53:26.406549  150386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:53:26.406610  150386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:53:26.414878  150386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:53:26.423137  150386 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:53:26.423193  150386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:53:26.423244  150386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:53:26.431078  150386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:53:26.439247  150386 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:53:26.439298  150386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:53:26.439345  150386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
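
Note: the four grep/rm pairs above are stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. Condensed into a loop (an equivalent sketch, not how minikube actually runs it):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done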
	I0916 10:53:26.447717  150386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:53:26.484433  150386 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:53:26.484477  150386 command_runner.go:130] > [init] Using Kubernetes version: v1.31.1
	I0916 10:53:26.484545  150386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:53:26.484555  150386 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 10:53:26.501068  150386 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:53:26.501100  150386 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:53:26.501168  150386 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:53:26.501193  150386 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:53:26.501262  150386 kubeadm.go:310] OS: Linux
	I0916 10:53:26.501274  150386 command_runner.go:130] > OS: Linux
	I0916 10:53:26.501374  150386 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:53:26.501395  150386 command_runner.go:130] > CGROUPS_CPU: enabled
	I0916 10:53:26.501456  150386 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:53:26.501467  150386 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0916 10:53:26.501527  150386 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:53:26.501537  150386 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0916 10:53:26.501630  150386 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:53:26.501642  150386 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0916 10:53:26.501719  150386 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:53:26.501737  150386 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0916 10:53:26.501817  150386 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:53:26.501829  150386 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0916 10:53:26.501881  150386 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:53:26.501894  150386 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0916 10:53:26.501965  150386 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:53:26.501981  150386 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0916 10:53:26.502049  150386 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:53:26.502060  150386 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0916 10:53:26.554639  150386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:53:26.554653  150386 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:53:26.554815  150386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:53:26.554832  150386 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:53:26.554962  150386 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:53:26.554974  150386 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:53:26.560763  150386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:53:26.560850  150386 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:53:26.563081  150386 out.go:235]   - Generating certificates and keys ...
	I0916 10:53:26.563189  150386 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0916 10:53:26.563204  150386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:53:26.563300  150386 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0916 10:53:26.563323  150386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:53:26.661612  150386 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:53:26.661640  150386 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:53:26.919823  150386 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:53:26.919861  150386 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:53:27.005190  150386 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:53:27.005221  150386 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0916 10:53:27.226400  150386 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:53:27.226457  150386 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0916 10:53:27.315950  150386 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:53:27.315981  150386 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0916 10:53:27.316132  150386 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-026168] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:53:27.316150  150386 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-026168] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:53:27.612384  150386 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:53:27.612414  150386 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0916 10:53:27.612550  150386 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-026168] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:53:27.612565  150386 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-026168] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:53:27.657432  150386 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:53:27.657466  150386 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:53:27.721218  150386 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:53:27.721247  150386 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:53:27.829857  150386 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:53:27.829877  150386 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0916 10:53:27.829978  150386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:53:27.829994  150386 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:53:27.901836  150386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:53:27.901863  150386 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:53:27.990782  150386 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:53:27.990806  150386 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:53:28.066565  150386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:53:28.066591  150386 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:53:28.286602  150386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:53:28.286635  150386 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:53:28.531261  150386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:53:28.531288  150386 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:53:28.532046  150386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:53:28.532067  150386 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:53:28.536520  150386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:53:28.536616  150386 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:53:28.539053  150386 out.go:235]   - Booting up control plane ...
	I0916 10:53:28.539193  150386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:53:28.539243  150386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:53:28.539365  150386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:53:28.539381  150386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:53:28.539976  150386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:53:28.539996  150386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:53:28.552676  150386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:53:28.552704  150386 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:53:28.558424  150386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:53:28.558455  150386 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:53:28.558497  150386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:53:28.558505  150386 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 10:53:28.640263  150386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:53:28.640300  150386 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:53:28.640435  150386 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:53:28.640447  150386 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:53:29.141777  150386 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.648489ms
	I0916 10:53:29.141809  150386 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.648489ms
	I0916 10:53:29.141898  150386 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:53:29.141922  150386 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:53:33.643743  150386 kubeadm.go:310] [api-check] The API server is healthy after 4.501974554s
	I0916 10:53:33.643773  150386 command_runner.go:130] > [api-check] The API server is healthy after 4.501974554s
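
Note: both health gates above are plain HTTP(S) endpoints and can be probed by hand from the node (manual checks, not part of the run; -k skips verification of the cluster-CA-signed serving cert):

	curl -sS http://127.0.0.1:10248/healthz        # kubelet
	curl -sSk https://192.168.67.2:8443/healthz    # apiserver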
	I0916 10:53:33.655458  150386 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:53:33.655490  150386 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:53:33.666692  150386 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:53:33.666702  150386 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:53:33.685168  150386 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:53:33.685197  150386 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:53:33.685391  150386 kubeadm.go:310] [mark-control-plane] Marking the node multinode-026168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:53:33.685401  150386 command_runner.go:130] > [mark-control-plane] Marking the node multinode-026168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:53:33.694893  150386 kubeadm.go:310] [bootstrap-token] Using token: t01fub.r49yz7owz29vmht5
	I0916 10:53:33.694919  150386 command_runner.go:130] > [bootstrap-token] Using token: t01fub.r49yz7owz29vmht5
	I0916 10:53:33.696702  150386 out.go:235]   - Configuring RBAC rules ...
	I0916 10:53:33.696831  150386 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:53:33.696848  150386 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:53:33.699750  150386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:53:33.699774  150386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:53:33.705469  150386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:53:33.705490  150386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:53:33.707965  150386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:53:33.707976  150386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:53:33.710360  150386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:53:33.710376  150386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:53:33.713861  150386 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:53:33.713878  150386 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:53:34.049692  150386 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:53:34.049712  150386 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:53:34.471244  150386 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:53:34.471270  150386 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0916 10:53:35.050638  150386 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:53:35.050661  150386 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0916 10:53:35.051407  150386 kubeadm.go:310] 
	I0916 10:53:35.051508  150386 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:53:35.051524  150386 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0916 10:53:35.051532  150386 kubeadm.go:310] 
	I0916 10:53:35.051671  150386 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:53:35.051683  150386 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0916 10:53:35.051689  150386 kubeadm.go:310] 
	I0916 10:53:35.051725  150386 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:53:35.051737  150386 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0916 10:53:35.051823  150386 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:53:35.051844  150386 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:53:35.051950  150386 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:53:35.051963  150386 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:53:35.051974  150386 kubeadm.go:310] 
	I0916 10:53:35.052068  150386 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:53:35.052083  150386 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0916 10:53:35.052090  150386 kubeadm.go:310] 
	I0916 10:53:35.052155  150386 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:53:35.052167  150386 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:53:35.052172  150386 kubeadm.go:310] 
	I0916 10:53:35.052241  150386 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:53:35.052252  150386 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0916 10:53:35.052358  150386 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:53:35.052368  150386 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:53:35.052472  150386 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:53:35.052477  150386 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:53:35.052487  150386 kubeadm.go:310] 
	I0916 10:53:35.052580  150386 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:53:35.052588  150386 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:53:35.052651  150386 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:53:35.052658  150386 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0916 10:53:35.052662  150386 kubeadm.go:310] 
	I0916 10:53:35.052761  150386 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t01fub.r49yz7owz29vmht5 \
	I0916 10:53:35.052769  150386 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token t01fub.r49yz7owz29vmht5 \
	I0916 10:53:35.052863  150386 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:53:35.052871  150386 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:53:35.052888  150386 kubeadm.go:310] 	--control-plane 
	I0916 10:53:35.052892  150386 command_runner.go:130] > 	--control-plane 
	I0916 10:53:35.052899  150386 kubeadm.go:310] 
	I0916 10:53:35.053025  150386 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:53:35.053049  150386 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:53:35.053056  150386 kubeadm.go:310] 
	I0916 10:53:35.053177  150386 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t01fub.r49yz7owz29vmht5 \
	I0916 10:53:35.053198  150386 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token t01fub.r49yz7owz29vmht5 \
	I0916 10:53:35.053370  150386 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:53:35.053384  150386 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
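
Note: if the join command above is lost, a fresh one can be printed with "kubeadm token create --print-join-command" on the control plane, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA (the standard kubeadm recipe; CA path as provisioned earlier in this run):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'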
	I0916 10:53:35.056111  150386 kubeadm.go:310] W0916 10:53:26.481933    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:53:35.056135  150386 command_runner.go:130] ! W0916 10:53:26.481933    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:53:35.056467  150386 kubeadm.go:310] W0916 10:53:26.482537    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:53:35.056481  150386 command_runner.go:130] ! W0916 10:53:26.482537    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:53:35.056823  150386 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:53:35.056851  150386 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:53:35.056961  150386 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:53:35.056988  150386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
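
Note: the first two warnings above come with their own remedy; migrating the config to the current API version would silence them (hypothetical output filename):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml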
	I0916 10:53:35.057009  150386 cni.go:84] Creating CNI manager for ""
	I0916 10:53:35.057019  150386 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:53:35.060028  150386 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:53:35.061344  150386 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:53:35.065587  150386 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0916 10:53:35.065616  150386 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0916 10:53:35.065627  150386 command_runner.go:130] > Device: 37h/55d	Inode: 544182      Links: 1
	I0916 10:53:35.065638  150386 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:53:35.065653  150386 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0916 10:53:35.065663  150386 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0916 10:53:35.065676  150386 command_runner.go:130] > Change: 2024-09-16 10:23:14.433787463 +0000
	I0916 10:53:35.065688  150386 command_runner.go:130] >  Birth: 2024-09-16 10:23:14.405785404 +0000
	I0916 10:53:35.065743  150386 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:53:35.065754  150386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:53:35.083563  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:53:35.259319  150386 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0916 10:53:35.264698  150386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0916 10:53:35.272503  150386 command_runner.go:130] > serviceaccount/kindnet created
	I0916 10:53:35.280912  150386 command_runner.go:130] > daemonset.apps/kindnet created
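
Note: the four objects above make up the kindnet CNI addon. Once the apply returns, the rollout can be confirmed manually (assuming the kube-system namespace that minikube's kindnet manifest uses):

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset/kindnet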
	I0916 10:53:35.284864  150386 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:53:35.284950  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:35.284980  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-026168 minikube.k8s.io/updated_at=2024_09_16T10_53_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-026168 minikube.k8s.io/primary=true
	I0916 10:53:35.291922  150386 command_runner.go:130] > -16
	I0916 10:53:35.291986  150386 ops.go:34] apiserver oom_adj: -16
	I0916 10:53:35.362511  150386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0916 10:53:35.362592  150386 command_runner.go:130] > node/multinode-026168 labeled
	I0916 10:53:35.362632  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:35.594017  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:35.863489  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:35.929347  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:36.363344  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:36.429937  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:36.863599  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:36.924251  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:37.363434  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:37.428045  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:37.863745  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:37.932230  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:38.362825  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:38.425127  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:38.863525  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:38.925423  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:39.362768  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:39.424290  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:39.863515  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:39.997203  150386 command_runner.go:130] > NAME      SECRETS   AGE
	I0916 10:53:39.997228  150386 command_runner.go:130] > default   0         0s
	I0916 10:53:40.000074  150386 kubeadm.go:1113] duration metric: took 4.715184212s to wait for elevateKubeSystemPrivileges
	I0916 10:53:40.000117  150386 kubeadm.go:394] duration metric: took 13.679975724s to StartCluster
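
Note: the retry loop above polls "kubectl get sa default" at roughly 500ms intervals until kube-controller-manager creates the namespace's default ServiceAccount. The same wait as a one-liner (an equivalent sketch):

	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do sleep 0.5; done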
	I0916 10:53:40.000141  150386 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:40.000222  150386 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:40.000897  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:40.001115  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:53:40.001134  150386 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:53:40.001191  150386 addons.go:69] Setting storage-provisioner=true in profile "multinode-026168"
	I0916 10:53:40.001113  150386 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:53:40.001210  150386 addons.go:234] Setting addon storage-provisioner=true in "multinode-026168"
	I0916 10:53:40.001230  150386 addons.go:69] Setting default-storageclass=true in profile "multinode-026168"
	I0916 10:53:40.001310  150386 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-026168"
	I0916 10:53:40.001359  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:53:40.001241  150386 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:53:40.001708  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:40.001829  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:40.004500  150386 out.go:177] * Verifying Kubernetes components...
	I0916 10:53:40.006313  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:53:40.024797  150386 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:53:40.026334  150386 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:53:40.026353  150386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:53:40.026414  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:40.031105  150386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:40.031422  150386 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
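The rest.Config dump above is the client configuration minikube derives from the test kubeconfig. A minimal client-go sketch of the same construction, assuming only a kubeconfig path (the path and names here are illustrative, not minikube's own kapi.go code):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a *rest.Config from a kubeconfig file, as the loader.go/kapi.go
    	// lines above do; the path is an example, not the report's actual path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// The resulting clientset drives GET/PUT round trips like those logged below.
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("API server:", cfg.Host, "clientset ready:", cs != nil)
    }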
	I0916 10:53:40.032571  150386 addons.go:234] Setting addon default-storageclass=true in "multinode-026168"
	I0916 10:53:40.032605  150386 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:53:40.032970  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:40.033254  150386 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:53:40.044917  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:40.062746  150386 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:53:40.062768  150386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:53:40.062836  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:40.079883  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:40.122609  150386 command_runner.go:130] > apiVersion: v1
	I0916 10:53:40.122632  150386 command_runner.go:130] > data:
	I0916 10:53:40.122639  150386 command_runner.go:130] >   Corefile: |
	I0916 10:53:40.122645  150386 command_runner.go:130] >     .:53 {
	I0916 10:53:40.122652  150386 command_runner.go:130] >         errors
	I0916 10:53:40.122660  150386 command_runner.go:130] >         health {
	I0916 10:53:40.122668  150386 command_runner.go:130] >            lameduck 5s
	I0916 10:53:40.122675  150386 command_runner.go:130] >         }
	I0916 10:53:40.122681  150386 command_runner.go:130] >         ready
	I0916 10:53:40.122690  150386 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0916 10:53:40.122703  150386 command_runner.go:130] >            pods insecure
	I0916 10:53:40.122711  150386 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0916 10:53:40.122723  150386 command_runner.go:130] >            ttl 30
	I0916 10:53:40.122732  150386 command_runner.go:130] >         }
	I0916 10:53:40.122738  150386 command_runner.go:130] >         prometheus :9153
	I0916 10:53:40.122749  150386 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0916 10:53:40.122757  150386 command_runner.go:130] >            max_concurrent 1000
	I0916 10:53:40.122767  150386 command_runner.go:130] >         }
	I0916 10:53:40.122773  150386 command_runner.go:130] >         cache 30
	I0916 10:53:40.122780  150386 command_runner.go:130] >         loop
	I0916 10:53:40.122789  150386 command_runner.go:130] >         reload
	I0916 10:53:40.122796  150386 command_runner.go:130] >         loadbalance
	I0916 10:53:40.122810  150386 command_runner.go:130] >     }
	I0916 10:53:40.122819  150386 command_runner.go:130] > kind: ConfigMap
	I0916 10:53:40.122825  150386 command_runner.go:130] > metadata:
	I0916 10:53:40.122838  150386 command_runner.go:130] >   creationTimestamp: "2024-09-16T10:53:34Z"
	I0916 10:53:40.122847  150386 command_runner.go:130] >   name: coredns
	I0916 10:53:40.122855  150386 command_runner.go:130] >   namespace: kube-system
	I0916 10:53:40.122864  150386 command_runner.go:130] >   resourceVersion: "231"
	I0916 10:53:40.122872  150386 command_runner.go:130] >   uid: e998cc8c-5131-4a5d-a9a1-432e2b6af9db
	I0916 10:53:40.125952  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:53:40.210364  150386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:53:40.216115  150386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:53:40.315121  150386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:53:40.608310  150386 command_runner.go:130] > configmap/coredns replaced
	I0916 10:53:40.614257  150386 start.go:971] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
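The sed pipeline above rewrites the CoreDNS ConfigMap fetched earlier: it inserts a hosts block ahead of the forward plugin (and a log directive ahead of errors), so host.minikube.internal resolves to the host gateway 192.168.67.1. After the replace, the relevant Corefile fragment should look roughly like this (reconstructed from the sed expressions, not captured from the cluster):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }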
	I0916 10:53:40.614797  150386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:40.615106  150386 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:53:40.615426  150386 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:53:40.615439  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.615447  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.615451  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.615950  150386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:40.616190  150386 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:53:40.616456  150386 node_ready.go:35] waiting up to 6m0s for node "multinode-026168" to be "Ready" ...
	I0916 10:53:40.616538  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:40.616546  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.616553  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.616558  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.626184  150386 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 10:53:40.626207  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.626216  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.626221  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.626227  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.626230  150386 round_trippers.go:580]     Audit-Id: f9a41f42-7443-4f80-a0c1-43f4f109f6c3
	I0916 10:53:40.626226  150386 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:53:40.626254  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.626270  150386 round_trippers.go:580]     Audit-Id: 0548045c-00aa-4805-9049-9c5199b72073
	I0916 10:53:40.626275  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.626282  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.626293  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.626299  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.626308  150386 round_trippers.go:580]     Content-Length: 291
	I0916 10:53:40.626314  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.626235  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.626342  150386 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"214e801a-0760-43e2-9590-87dc9876a663","resourceVersion":"340","creationTimestamp":"2024-09-16T10:53:34Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:53:40.626349  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.626520  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:40.626896  150386 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"214e801a-0760-43e2-9590-87dc9876a663","resourceVersion":"340","creationTimestamp":"2024-09-16T10:53:34Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:53:40.626962  150386 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:53:40.626978  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.626988  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.626995  150386 round_trippers.go:473]     Content-Type: application/json
	I0916 10:53:40.627006  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.632007  150386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:53:40.632024  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.632032  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.632035  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.632038  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.632041  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.632045  150386 round_trippers.go:580]     Content-Length: 291
	I0916 10:53:40.632047  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.632050  150386 round_trippers.go:580]     Audit-Id: 44165c95-2095-4714-b953-3c36a7e400d6
	I0916 10:53:40.632066  150386 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"214e801a-0760-43e2-9590-87dc9876a663","resourceVersion":"354","creationTimestamp":"2024-09-16T10:53:34Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:53:40.859067  150386 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0916 10:53:40.864917  150386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0916 10:53:40.871412  150386 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 10:53:40.877843  150386 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 10:53:40.885623  150386 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0916 10:53:40.893583  150386 command_runner.go:130] > pod/storage-provisioner created
	I0916 10:53:40.898202  150386 command_runner.go:130] > storageclass.storage.k8s.io/standard created
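The 271-byte storageclass.yaml applied above corresponds, judging from the kubectl.kubernetes.io/last-applied-configuration annotation echoed in the API responses below, to a manifest equivalent to this (a reconstruction, not a copy of minikube's bundled asset):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      labels:
        addonmanager.kubernetes.io/mode: EnsureExists
    provisioner: k8s.io/minikube-hostpath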
	I0916 10:53:40.898297  150386 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:53:40.898322  150386 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:53:40.898404  150386 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:53:40.898414  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.898424  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.898429  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.902756  150386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:53:40.902779  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.902786  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.902791  150386 round_trippers.go:580]     Audit-Id: 32fa806b-d148-4927-b934-aba6392098c5
	I0916 10:53:40.902795  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.902798  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.902801  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.902806  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.902809  150386 round_trippers.go:580]     Content-Length: 1273
	I0916 10:53:40.902890  150386 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"373"},"items":[{"metadata":{"name":"standard","uid":"36c62ec6-ddea-48a1-9dc2-2da1904ffa1f","resourceVersion":"353","creationTimestamp":"2024-09-16T10:53:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:53:40.903237  150386 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"36c62ec6-ddea-48a1-9dc2-2da1904ffa1f","resourceVersion":"353","creationTimestamp":"2024-09-16T10:53:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:53:40.903283  150386 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:53:40.903292  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.903301  150386 round_trippers.go:473]     Content-Type: application/json
	I0916 10:53:40.903306  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.903308  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.906003  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:40.906026  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.906036  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.906041  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.906047  150386 round_trippers.go:580]     Content-Length: 1220
	I0916 10:53:40.906051  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.906056  150386 round_trippers.go:580]     Audit-Id: 4fd6db17-21e5-4aec-8b6d-0ef0ff14fb81
	I0916 10:53:40.906062  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.906066  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.906097  150386 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"36c62ec6-ddea-48a1-9dc2-2da1904ffa1f","resourceVersion":"353","creationTimestamp":"2024-09-16T10:53:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:53:40.908589  150386 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:53:40.910350  150386 addons.go:510] duration metric: took 909.208755ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 10:53:41.116452  150386 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:53:41.116477  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:41.116485  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:41.116489  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:41.116640  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:41.116672  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:41.116684  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:41.116691  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:41.118874  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:41.118908  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:41.118920  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:41.118928  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:41.118933  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:41.118938  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:41.118945  150386 round_trippers.go:580]     Content-Length: 291
	I0916 10:53:41.118951  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:41 GMT
	I0916 10:53:41.118956  150386 round_trippers.go:580]     Audit-Id: 14700eff-8e81-414a-96de-3277b23c7acc
	I0916 10:53:41.118956  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:41.119028  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:41.119040  150386 round_trippers.go:580]     Audit-Id: fd4e6ea9-e690-4e72-a149-c9b8ee79d7fd
	I0916 10:53:41.119045  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:41.119049  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:41.119052  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:41.119056  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:41.119061  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:41 GMT
	I0916 10:53:41.118992  150386 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"214e801a-0760-43e2-9590-87dc9876a663","resourceVersion":"365","creationTimestamp":"2024-09-16T10:53:34Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0916 10:53:41.119190  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:41.119243  150386 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-026168" context rescaled to 1 replicas
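The GET/PUT pair on /deployments/coredns/scale above drives the Deployment's scale subresource. A minimal client-go sketch of the same rescale (the function name is illustrative; this is not minikube's kapi.go):

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS fetches the current Scale object and writes it back with
    // the desired replica count, mirroring the GET and PUT bodies logged above.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas // e.g. 1, as in the request body above
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }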
	I0916 10:53:41.617105  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:41.617134  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:41.617142  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:41.617147  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:41.619550  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:41.619576  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:41.619585  150386 round_trippers.go:580]     Audit-Id: eba217dc-cf5e-453c-8d97-7d7bebdba7f2
	I0916 10:53:41.619589  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:41.619594  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:41.619598  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:41.619603  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:41.619609  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:41 GMT
	I0916 10:53:41.619784  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:42.117504  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:42.117530  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:42.117540  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:42.117543  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:42.119752  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:42.119775  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:42.119784  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:42.119788  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:42.119793  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:42.119799  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:42 GMT
	I0916 10:53:42.119803  150386 round_trippers.go:580]     Audit-Id: 1b6bc607-25a8-4eb2-94c2-669ae72227f6
	I0916 10:53:42.119807  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:42.119919  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:42.617283  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:42.617309  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:42.617318  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:42.617323  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:42.619627  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:42.619650  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:42.619657  150386 round_trippers.go:580]     Audit-Id: 1a21f591-014b-4e8a-a374-83658b7ace7a
	I0916 10:53:42.619665  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:42.619669  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:42.619672  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:42.619676  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:42.619680  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:42 GMT
	I0916 10:53:42.619783  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:42.620092  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
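From here node_ready.go polls GET /api/v1/nodes/multinode-026168 roughly every 500ms, checking the node's Ready condition, until it turns True or the 6m0s budget set in start.go expires. A minimal sketch of such a loop (interval and structure inferred from the log cadence, not taken from minikube's source):

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady returns once the named node reports Ready=True, or when
    // the context (carrying the overall wait budget) is cancelled.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // e.g. the 6m0s wait noted above
    		case <-tick.C:
    		}
    	}
    }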
	I0916 10:53:43.117401  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:43.117426  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:43.117437  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:43.117443  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:43.119565  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:43.119588  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:43.119597  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:43.119604  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:43.119608  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:43.119612  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:43 GMT
	I0916 10:53:43.119619  150386 round_trippers.go:580]     Audit-Id: e4cfefcf-91cd-441a-833b-d12723eb585e
	I0916 10:53:43.119623  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:43.119734  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:43.616944  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:43.616969  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:43.616976  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:43.616980  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:43.619154  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:43.619180  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:43.619190  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:43.619194  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:43.619198  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:43.619201  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:43.619203  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:43 GMT
	I0916 10:53:43.619206  150386 round_trippers.go:580]     Audit-Id: cc3120c2-4368-4109-b162-4462ae59da8e
	I0916 10:53:43.619409  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:44.117059  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:44.117088  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:44.117097  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:44.117100  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:44.119343  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:44.119369  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:44.119376  150386 round_trippers.go:580]     Audit-Id: 8c110a90-6a12-4fec-8811-94c426b77d70
	I0916 10:53:44.119379  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:44.119383  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:44.119386  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:44.119389  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:44.119394  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:44 GMT
	I0916 10:53:44.119508  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:44.616663  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:44.616688  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:44.616696  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:44.616701  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:44.618973  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:44.618995  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:44.619001  150386 round_trippers.go:580]     Audit-Id: 770b78cf-805b-43fb-8530-7c33082ba3bb
	I0916 10:53:44.619005  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:44.619008  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:44.619011  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:44.619014  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:44.619016  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:44 GMT
	I0916 10:53:44.619117  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:45.116767  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:45.116792  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:45.116800  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:45.116805  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:45.119314  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:45.119342  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:45.119350  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:45.119355  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:45 GMT
	I0916 10:53:45.119358  150386 round_trippers.go:580]     Audit-Id: 0ee4bef0-a16c-4708-a7a4-dfadfc5ccb46
	I0916 10:53:45.119361  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:45.119363  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:45.119369  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:45.119484  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:45.119999  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:45.617022  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:45.617046  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:45.617055  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:45.617059  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:45.619402  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:45.619422  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:45.619429  150386 round_trippers.go:580]     Audit-Id: cdced059-83aa-47fc-8f6f-45fe88339dea
	I0916 10:53:45.619432  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:45.619436  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:45.619441  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:45.619446  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:45.619450  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:45 GMT
	I0916 10:53:45.619591  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:46.117228  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:46.117251  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:46.117259  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:46.117262  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:46.119638  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:46.119659  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:46.119669  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:46.119674  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:46.119680  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:46.119684  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:46.119689  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:46 GMT
	I0916 10:53:46.119694  150386 round_trippers.go:580]     Audit-Id: f19037a1-764f-4e76-b3ec-4d94d9087b98
	I0916 10:53:46.119830  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:46.617095  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:46.617118  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:46.617126  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:46.617130  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:46.619352  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:46.619371  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:46.619378  150386 round_trippers.go:580]     Audit-Id: 12a0932c-dd20-4f12-8a40-8500af01b0aa
	I0916 10:53:46.619382  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:46.619384  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:46.619387  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:46.619390  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:46.619393  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:46 GMT
	I0916 10:53:46.619548  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:47.117142  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:47.117169  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:47.117177  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:47.117182  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:47.119467  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:47.119492  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:47.119502  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:47.119508  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:47.119513  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:47 GMT
	I0916 10:53:47.119518  150386 round_trippers.go:580]     Audit-Id: 4529e82f-203e-4e00-857e-1e2d1684de05
	I0916 10:53:47.119522  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:47.119526  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:47.119679  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:47.617369  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:47.617397  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:47.617405  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:47.617409  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:47.619634  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:47.619657  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:47.619666  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:47.619671  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:47 GMT
	I0916 10:53:47.619680  150386 round_trippers.go:580]     Audit-Id: 12feab5b-afae-4eb3-ad25-2a966d6200dc
	I0916 10:53:47.619685  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:47.619693  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:47.619696  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:47.619809  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:47.620109  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:48.117541  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:48.117570  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:48.117578  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:48.117583  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:48.119926  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:48.119951  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:48.119957  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:48.119963  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:48.119969  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:48.119973  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:48.119981  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:48 GMT
	I0916 10:53:48.119984  150386 round_trippers.go:580]     Audit-Id: 12def08e-855e-4272-8e5f-682d77355528
	I0916 10:53:48.120102  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:48.616746  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:48.616771  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:48.616778  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:48.616782  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:48.619202  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:48.619225  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:48.619234  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:48.619242  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:48.619247  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:48.619251  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:48.619254  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:48 GMT
	I0916 10:53:48.619258  150386 round_trippers.go:580]     Audit-Id: 517665d8-72ec-4dd0-926c-36104c9d5963
	I0916 10:53:48.619401  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:49.116961  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:49.116996  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:49.117004  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:49.117007  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:49.119412  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:49.119441  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:49.119451  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:49 GMT
	I0916 10:53:49.119456  150386 round_trippers.go:580]     Audit-Id: 9ae35bce-2097-4129-a372-362680001968
	I0916 10:53:49.119460  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:49.119469  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:49.119472  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:49.119478  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:49.119670  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:49.617424  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:49.617456  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:49.617468  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:49.617472  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:49.619722  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:49.619740  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:49.619746  150386 round_trippers.go:580]     Audit-Id: bfe53f2a-cab2-4d3c-834a-90af3ebd269d
	I0916 10:53:49.619751  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:49.619753  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:49.619756  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:49.619762  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:49.619766  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:49 GMT
	I0916 10:53:49.619927  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:49.620260  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:50.116728  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:50.116756  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:50.116764  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:50.116768  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:50.119178  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:50.119208  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:50.119218  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:50.119226  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:50.119233  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:50.119239  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:50 GMT
	I0916 10:53:50.119244  150386 round_trippers.go:580]     Audit-Id: d39ddcf8-996b-4e75-a653-a984a88a4d95
	I0916 10:53:50.119249  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:50.119352  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:50.616946  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:50.616971  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:50.616979  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:50.616984  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:50.619019  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:50.619037  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:50.619043  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:50.619047  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:50 GMT
	I0916 10:53:50.619049  150386 round_trippers.go:580]     Audit-Id: 885d5982-7134-4f63-9e57-3353514e2aa0
	I0916 10:53:50.619052  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:50.619054  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:50.619057  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:50.619248  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:51.117517  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:51.117548  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:51.117559  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:51.117565  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:51.119940  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:51.119960  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:51.119967  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:51.119970  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:51 GMT
	I0916 10:53:51.119973  150386 round_trippers.go:580]     Audit-Id: 590ad475-ebeb-4883-886f-00302fe65d3d
	I0916 10:53:51.119976  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:51.119979  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:51.119981  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:51.120171  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:51.616972  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:51.616999  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:51.617008  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:51.617013  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:51.619550  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:51.619571  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:51.619577  150386 round_trippers.go:580]     Audit-Id: 03cf0642-860b-42c3-b1d2-7006ea64714a
	I0916 10:53:51.619580  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:51.619584  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:51.619588  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:51.619593  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:51.619596  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:51 GMT
	I0916 10:53:51.619837  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:52.117501  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:52.117525  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:52.117533  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:52.117537  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:52.119864  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:52.119894  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:52.119904  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:52.119910  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:52.119915  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:52 GMT
	I0916 10:53:52.119920  150386 round_trippers.go:580]     Audit-Id: 938bb85d-2831-446e-a44d-bcbbcee136b0
	I0916 10:53:52.119923  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:52.119927  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:52.120088  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:52.120477  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:52.616688  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:52.616709  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:52.616716  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:52.616721  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:52.618998  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:52.619018  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:52.619025  150386 round_trippers.go:580]     Audit-Id: 23435aab-675e-477e-8d42-3a068f46a079
	I0916 10:53:52.619030  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:52.619033  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:52.619036  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:52.619038  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:52.619041  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:52 GMT
	I0916 10:53:52.619183  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:53.116723  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:53.116750  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:53.116758  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:53.116764  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:53.119088  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:53.119121  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:53.119132  150386 round_trippers.go:580]     Audit-Id: 63da5e7f-f2e8-4bb9-8a07-5fe007ff0a5b
	I0916 10:53:53.119139  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:53.119146  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:53.119157  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:53.119165  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:53.119171  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:53 GMT
	I0916 10:53:53.119306  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:53.616823  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:53.616849  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:53.616856  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:53.616859  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:53.619150  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:53.619169  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:53.619176  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:53.619179  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:53.619183  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:53 GMT
	I0916 10:53:53.619187  150386 round_trippers.go:580]     Audit-Id: e386e022-3a4a-4bbb-8b4c-43c8483579e9
	I0916 10:53:53.619190  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:53.619195  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:53.619321  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:54.116895  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:54.116921  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:54.116930  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:54.116935  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:54.119182  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:54.119201  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:54.119208  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:54.119211  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:54.119216  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:54.119219  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:54.119221  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:54 GMT
	I0916 10:53:54.119224  150386 round_trippers.go:580]     Audit-Id: 5aa37b20-b37d-46b8-8b98-82b05a31ce2e
	I0916 10:53:54.119388  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:54.617067  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:54.617101  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:54.617113  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:54.617118  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:54.619234  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:54.619254  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:54.619260  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:54.619264  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:54.619267  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:54 GMT
	I0916 10:53:54.619270  150386 round_trippers.go:580]     Audit-Id: 1c19a77a-39bd-4b3a-baaa-5ac3dd293d29
	I0916 10:53:54.619272  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:54.619275  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:54.619457  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:54.619843  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:55.117081  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:55.117106  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:55.117115  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:55.117119  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:55.119379  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:55.119404  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:55.119413  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:55.119419  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:55.119424  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:55 GMT
	I0916 10:53:55.119430  150386 round_trippers.go:580]     Audit-Id: d2ee37de-b302-4acc-8337-5b6537438e81
	I0916 10:53:55.119436  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:55.119442  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:55.119597  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:55.617047  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:55.617072  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:55.617090  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:55.617094  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:55.619435  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:55.619457  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:55.619465  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:55 GMT
	I0916 10:53:55.619470  150386 round_trippers.go:580]     Audit-Id: b95077a7-0a3f-4670-aff3-54d3926db2ae
	I0916 10:53:55.619474  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:55.619477  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:55.619481  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:55.619485  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:55.619585  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:56.116747  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:56.116773  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:56.116780  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:56.116784  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:56.119036  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:56.119057  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:56.119064  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:56.119069  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:56.119073  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:56 GMT
	I0916 10:53:56.119079  150386 round_trippers.go:580]     Audit-Id: b8184018-3755-40d8-b48a-5cc359d5313b
	I0916 10:53:56.119084  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:56.119087  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:56.119187  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:56.617245  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:56.617270  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:56.617278  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:56.617283  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:56.619756  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:56.619780  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:56.619788  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:56.619792  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:56.619796  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:56 GMT
	I0916 10:53:56.619801  150386 round_trippers.go:580]     Audit-Id: 23e8f8ed-1381-4a83-b8cc-121d8428adc8
	I0916 10:53:56.619806  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:56.619809  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:56.619984  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:56.620353  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:57.117692  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:57.117715  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:57.117724  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:57.117728  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:57.120019  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:57.120043  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:57.120052  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:57.120058  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:57.120063  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:57 GMT
	I0916 10:53:57.120067  150386 round_trippers.go:580]     Audit-Id: 8cf78437-1de9-4a85-9b9f-30670f0a7dc5
	I0916 10:53:57.120071  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:57.120074  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:57.120352  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:57.616926  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:57.616960  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:57.616970  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:57.616976  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:57.619372  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:57.619396  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:57.619404  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:57.619409  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:57.619413  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:57 GMT
	I0916 10:53:57.619417  150386 round_trippers.go:580]     Audit-Id: 1f1e36b3-1c7c-4544-8a9c-eb512aa82b6c
	I0916 10:53:57.619421  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:57.619426  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:57.619562  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:58.117247  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:58.117279  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:58.117290  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:58.117294  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:58.119568  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:58.119593  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:58.119603  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:58.119609  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:58 GMT
	I0916 10:53:58.119614  150386 round_trippers.go:580]     Audit-Id: 4fbcc7f0-7d08-411b-b127-0b0b663a6729
	I0916 10:53:58.119620  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:58.119624  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:58.119630  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:58.119788  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:58.617485  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:58.617510  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:58.617518  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:58.617523  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:58.619577  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:58.619600  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:58.619615  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:58 GMT
	I0916 10:53:58.619620  150386 round_trippers.go:580]     Audit-Id: 8244889b-a63e-4c50-b675-1ad681e4d690
	I0916 10:53:58.619624  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:58.619629  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:58.619634  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:58.619638  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:58.619803  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:59.117490  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:59.117514  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:59.117522  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:59.117525  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:59.119739  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:59.119760  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:59.119774  150386 round_trippers.go:580]     Audit-Id: 8291e611-cc2b-4443-a010-cb47dcfe3392
	I0916 10:53:59.119781  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:59.119786  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:59.119792  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:59.119797  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:59.119804  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:59 GMT
	I0916 10:53:59.119931  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:59.120229  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:59.617693  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:59.617714  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:59.617722  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:59.617725  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:59.619896  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:59.619914  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:59.619920  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:59.619924  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:59 GMT
	I0916 10:53:59.619927  150386 round_trippers.go:580]     Audit-Id: dfdf1042-375b-4e8a-bb7c-a2fa683ba77c
	I0916 10:53:59.619930  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:59.619932  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:59.619938  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:59.620080  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:00.116886  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:00.116916  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:00.116923  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:00.116932  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:00.119220  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:00.119238  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:00.119244  150386 round_trippers.go:580]     Audit-Id: 392b271a-95ed-4b56-ba55-057c71956cd4
	I0916 10:54:00.119248  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:00.119253  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:00.119257  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:00.119260  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:00.119264  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:00 GMT
	I0916 10:54:00.119387  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:00.617035  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:00.617060  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:00.617068  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:00.617072  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:00.619477  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:00.619507  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:00.619515  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:00.619520  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:00 GMT
	I0916 10:54:00.619525  150386 round_trippers.go:580]     Audit-Id: 9e2085c0-1961-4471-882c-48f50115b637
	I0916 10:54:00.619529  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:00.619535  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:00.619538  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:00.619745  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:01.117322  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:01.117358  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:01.117373  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:01.117379  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:01.119427  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:01.119450  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:01.119459  150386 round_trippers.go:580]     Audit-Id: d7282b5e-c8f0-476d-86bc-4d9ba3a6b0cc
	I0916 10:54:01.119463  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:01.119469  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:01.119474  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:01.119480  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:01.119485  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:01 GMT
	I0916 10:54:01.119610  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:01.617460  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:01.617491  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:01.617503  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:01.617509  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:01.620032  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:01.620061  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:01.620069  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:01.620076  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:01.620081  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:01.620085  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:01 GMT
	I0916 10:54:01.620090  150386 round_trippers.go:580]     Audit-Id: 7a6cbb8a-2690-4d9e-92ea-c5e9a72def47
	I0916 10:54:01.620094  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:01.620257  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:01.620558  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
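
[Editor's note: the round_trippers.go lines bracketing every probe come from client-go's debug transport wrappers, which are enabled only at increased log verbosity: they print the request line, the request headers, the response status with its latency, and the response headers. A hypothetical, stripped-down equivalent (not client-go's real implementation) is just an http.RoundTripper wrapped around another one:]

```go
package main

import (
	"log"
	"net/http"
	"strings"
	"time"
)

// loggingRoundTripper is a minimal, illustrative stand-in for client-go's
// debug wrappers: it logs the request line, the request headers, and the
// response status with latency around the wrapped transport.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vs := range req.Header {
		log.Printf("    %s: %s", k, strings.Join(vs, ", "))
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	// Wiring the transport into the client is the whole trick; the call itself
	// fails outside the test cluster and is shown only to demonstrate usage.
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	_, _ = client.Get("https://192.168.67.2:8443/api/v1/nodes/multinode-026168")
}
```
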
	I0916 10:54:02.116862  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:02.116889  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:02.116896  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:02.116903  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:02.119134  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:02.119153  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:02.119160  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:02.119167  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:02.119172  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:02.119176  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:02.119179  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:02 GMT
	I0916 10:54:02.119183  150386 round_trippers.go:580]     Audit-Id: 9b822245-ec73-4fa0-b7af-40f5bc4b2882
	I0916 10:54:02.119309  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:02.616880  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:02.616910  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:02.616919  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:02.616923  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:02.619117  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:02.619143  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:02.619153  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:02 GMT
	I0916 10:54:02.619158  150386 round_trippers.go:580]     Audit-Id: 9f94f6b8-d35a-467d-987c-81b16e427b7f
	I0916 10:54:02.619164  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:02.619171  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:02.619175  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:02.619180  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:02.619330  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:03.116891  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:03.116916  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:03.116923  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:03.116928  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:03.119204  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:03.119225  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:03.119231  150386 round_trippers.go:580]     Audit-Id: bae5e557-fc03-45d9-8a9d-d7867d46a500
	I0916 10:54:03.119239  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:03.119242  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:03.119244  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:03.119247  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:03.119249  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:03 GMT
	I0916 10:54:03.119351  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:03.616993  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:03.617025  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:03.617037  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:03.617043  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:03.619304  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:03.619327  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:03.619335  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:03.619339  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:03 GMT
	I0916 10:54:03.619342  150386 round_trippers.go:580]     Audit-Id: c0d34464-36ee-4376-a44c-8ee9c00b9017
	I0916 10:54:03.619345  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:03.619349  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:03.619351  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:03.619525  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:04.117212  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:04.117236  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:04.117244  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:04.117249  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:04.119600  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:04.119622  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:04.119633  150386 round_trippers.go:580]     Audit-Id: 45f293fa-f5b6-47e8-8490-79743ff5bc1a
	I0916 10:54:04.119636  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:04.119639  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:04.119641  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:04.119644  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:04.119646  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:04 GMT
	I0916 10:54:04.119837  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:04.120173  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:04.617468  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:04.617498  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:04.617506  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:04.617511  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:04.619521  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:04.619541  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:04.619548  150386 round_trippers.go:580]     Audit-Id: 2be5c4a2-e870-4bd5-abe3-83dd951a3b03
	I0916 10:54:04.619552  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:04.619557  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:04.619561  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:04.619564  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:04.619568  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:04 GMT
	I0916 10:54:04.619743  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:05.117458  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:05.117484  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:05.117492  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:05.117499  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:05.119659  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:05.119679  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:05.119686  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:05.119691  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:05.119695  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:05.119700  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:05 GMT
	I0916 10:54:05.119704  150386 round_trippers.go:580]     Audit-Id: 5e9214a7-0771-44ed-93a6-e554e9ddd410
	I0916 10:54:05.119708  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:05.119863  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:05.617548  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:05.617570  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:05.617577  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:05.617583  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:05.619759  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:05.619779  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:05.619788  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:05.619793  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:05.619796  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:05 GMT
	I0916 10:54:05.619799  150386 round_trippers.go:580]     Audit-Id: 8fe4f9c6-fa42-490c-9251-a4a9920d93b4
	I0916 10:54:05.619802  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:05.619805  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:05.619942  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:06.117627  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:06.117649  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:06.117658  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:06.117662  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:06.120388  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:06.120475  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:06.120497  150386 round_trippers.go:580]     Audit-Id: 9f49b3da-fef4-44b7-821c-1883547fa9a4
	I0916 10:54:06.120506  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:06.120527  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:06.120536  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:06.120540  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:06.120544  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:06 GMT
	I0916 10:54:06.120712  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:06.121171  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:06.617566  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:06.617590  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:06.617599  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:06.617604  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:06.619738  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:06.619757  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:06.619764  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:06.619767  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:06.619770  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:06.619774  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:06 GMT
	I0916 10:54:06.619776  150386 round_trippers.go:580]     Audit-Id: 268af30a-044b-4099-8c2d-81b72a2d5b84
	I0916 10:54:06.619779  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:06.619974  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:07.116654  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:07.116683  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:07.116692  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:07.116701  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:07.118906  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:07.118930  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:07.118940  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:07.118945  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:07.118951  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:07.118956  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:07.118961  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:07 GMT
	I0916 10:54:07.118965  150386 round_trippers.go:580]     Audit-Id: ae8d4514-f532-4b42-a139-617e17330272
	I0916 10:54:07.119105  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:07.616736  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:07.616762  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:07.616769  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:07.616774  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:07.619001  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:07.619022  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:07.619035  150386 round_trippers.go:580]     Audit-Id: 461d9fe1-f9b0-409f-b6b8-e0b29c479f23
	I0916 10:54:07.619040  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:07.619045  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:07.619048  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:07.619052  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:07.619057  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:07 GMT
	I0916 10:54:07.619217  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:08.116824  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:08.116850  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:08.116861  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:08.116868  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:08.119256  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:08.119285  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:08.119293  150386 round_trippers.go:580]     Audit-Id: 8d37ef0f-05ee-4f42-9fe1-80db8abf8df6
	I0916 10:54:08.119297  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:08.119300  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:08.119305  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:08.119308  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:08.119314  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:08 GMT
	I0916 10:54:08.119432  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:08.616964  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:08.617006  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:08.617016  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:08.617021  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:08.619206  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:08.619228  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:08.619237  150386 round_trippers.go:580]     Audit-Id: 338c18e5-bd4c-4adc-9e46-f52f0f9fe471
	I0916 10:54:08.619241  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:08.619246  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:08.619249  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:08.619253  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:08.619257  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:08 GMT
	I0916 10:54:08.619386  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:08.619714  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:09.116747  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:09.116771  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:09.116781  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:09.116787  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:09.119002  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:09.119022  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:09.119032  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:09.119037  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:09.119042  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:09.119047  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:09 GMT
	I0916 10:54:09.119051  150386 round_trippers.go:580]     Audit-Id: 944d5c8a-8d97-45f0-bf41-0cf7b23809c5
	I0916 10:54:09.119055  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:09.119173  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:09.616735  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:09.616761  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:09.616768  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:09.616772  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:09.619149  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:09.619175  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:09.619185  150386 round_trippers.go:580]     Audit-Id: 78ea15c6-485c-40cf-8958-1045974f90a8
	I0916 10:54:09.619189  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:09.619195  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:09.619198  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:09.619201  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:09.619204  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:09 GMT
	I0916 10:54:09.619327  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:10.117118  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:10.117140  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:10.117148  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:10.117152  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:10.119364  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:10.119392  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:10.119401  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:10.119407  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:10.119412  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:10 GMT
	I0916 10:54:10.119417  150386 round_trippers.go:580]     Audit-Id: dacb2d06-957b-4648-a200-d9676d52fc79
	I0916 10:54:10.119421  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:10.119424  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:10.119573  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:10.617257  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:10.617282  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:10.617290  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:10.617293  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:10.619517  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:10.619544  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:10.619553  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:10 GMT
	I0916 10:54:10.619558  150386 round_trippers.go:580]     Audit-Id: 78644a3f-da34-4061-8e55-763b6523fb12
	I0916 10:54:10.619591  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:10.619596  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:10.619601  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:10.619608  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:10.619785  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:10.620129  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:11.117544  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:11.117568  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:11.117598  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:11.117602  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:11.119835  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:11.119860  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:11.119868  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:11.119874  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:11 GMT
	I0916 10:54:11.119878  150386 round_trippers.go:580]     Audit-Id: a2e73f77-df1f-4e91-a413-7145e3790143
	I0916 10:54:11.119881  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:11.119886  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:11.119890  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:11.120068  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:11.616910  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:11.616932  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:11.616940  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:11.616944  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:11.619107  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:11.619129  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:11.619134  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:11.619139  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:11.619142  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:11 GMT
	I0916 10:54:11.619146  150386 round_trippers.go:580]     Audit-Id: d65d6da5-b57a-4909-8e30-5651b7705c5a
	I0916 10:54:11.619149  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:11.619155  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:11.619340  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:12.117004  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:12.117032  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:12.117040  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:12.117045  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:12.119461  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:12.119488  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:12.119499  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:12.119507  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:12 GMT
	I0916 10:54:12.119521  150386 round_trippers.go:580]     Audit-Id: f4b344ea-5d12-4855-bba7-702aeaddfd9c
	I0916 10:54:12.119527  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:12.119531  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:12.119535  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:12.119666  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:12.617119  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:12.617148  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:12.617158  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:12.617164  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:12.619328  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:12.619355  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:12.619363  150386 round_trippers.go:580]     Audit-Id: b3eb2aa8-bb22-4047-9839-d00e1f1ba713
	I0916 10:54:12.619367  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:12.619371  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:12.619375  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:12.619378  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:12.619384  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:12 GMT
	I0916 10:54:12.619560  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:13.117154  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:13.117181  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:13.117189  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:13.117194  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:13.119604  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:13.119630  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:13.119639  150386 round_trippers.go:580]     Audit-Id: c8f5cf60-c646-4679-801f-7ae2e5c3ba6d
	I0916 10:54:13.119644  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:13.119648  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:13.119651  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:13.119655  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:13.119659  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:13 GMT
	I0916 10:54:13.119839  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:13.120162  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:13.617539  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:13.617563  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:13.617573  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:13.617580  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:13.619962  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:13.619991  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:13.620001  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:13.620005  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:13 GMT
	I0916 10:54:13.620011  150386 round_trippers.go:580]     Audit-Id: 5c01f644-0736-4cb5-a8d4-13945e0fbf51
	I0916 10:54:13.620015  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:13.620021  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:13.620026  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:13.620197  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:14.116867  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:14.116908  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:14.116916  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:14.116919  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:14.119258  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:14.119285  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:14.119295  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:14.119303  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:14.119307  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:14.119311  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:14.119316  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:14 GMT
	I0916 10:54:14.119321  150386 round_trippers.go:580]     Audit-Id: ddf15265-f19a-4f14-9e84-803422b4fa29
	I0916 10:54:14.119425  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:14.616889  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:14.616914  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:14.616924  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:14.616930  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:14.618983  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:14.619009  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:14.619020  150386 round_trippers.go:580]     Audit-Id: 452ee33c-3ea0-42b2-b0bf-c04ce7660c10
	I0916 10:54:14.619024  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:14.619029  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:14.619033  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:14.619047  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:14.619054  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:14 GMT
	I0916 10:54:14.619170  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:15.116750  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:15.116776  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:15.116784  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:15.116788  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:15.119339  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:15.119366  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:15.119374  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:15 GMT
	I0916 10:54:15.119379  150386 round_trippers.go:580]     Audit-Id: dd1f1cc7-dc3d-4945-bae8-fa83ff662f3d
	I0916 10:54:15.119382  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:15.119385  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:15.119389  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:15.119393  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:15.119568  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:15.617310  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:15.617354  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:15.617362  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:15.617364  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:15.619707  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:15.619731  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:15.619740  150386 round_trippers.go:580]     Audit-Id: 02db9117-edca-4966-b561-7342514e4175
	I0916 10:54:15.619747  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:15.619750  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:15.619754  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:15.619758  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:15.619762  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:15 GMT
	I0916 10:54:15.619950  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:15.620279  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:16.117647  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:16.117670  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:16.117677  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:16.117682  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:16.120054  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:16.120076  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:16.120086  150386 round_trippers.go:580]     Audit-Id: 931dfbfa-accc-4152-bdb1-53ab7f374af9
	I0916 10:54:16.120097  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:16.120103  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:16.120107  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:16.120111  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:16.120115  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:16 GMT
	I0916 10:54:16.120226  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:16.617542  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:16.617564  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:16.617572  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:16.617576  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:16.619723  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:16.619744  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:16.619751  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:16.619756  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:16.619759  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:16.619762  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:16.619765  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:16 GMT
	I0916 10:54:16.619768  150386 round_trippers.go:580]     Audit-Id: ef38a36a-4590-42fc-8ed5-00d2b11d84c8
	I0916 10:54:16.619904  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:17.117559  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:17.117582  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:17.117589  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:17.117592  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:17.120089  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:17.120114  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:17.120121  150386 round_trippers.go:580]     Audit-Id: 0d7a5e53-75bf-48b4-abdc-156b2590e690
	I0916 10:54:17.120126  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:17.120129  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:17.120133  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:17.120137  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:17.120141  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:17 GMT
	I0916 10:54:17.120237  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:17.616741  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:17.616765  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:17.616773  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:17.616779  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:17.618939  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:17.618966  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:17.618978  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:17.618984  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:17.618990  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:17.618996  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:17.619006  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:17 GMT
	I0916 10:54:17.619015  150386 round_trippers.go:580]     Audit-Id: 12a35e31-01f4-4ec4-b7aa-0de50d15a224
	I0916 10:54:17.619199  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:18.116842  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:18.116869  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:18.116879  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:18.116885  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:18.119481  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:18.119503  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:18.119515  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:18.119521  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:18.119525  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:18.119529  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:18.119533  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:18 GMT
	I0916 10:54:18.119537  150386 round_trippers.go:580]     Audit-Id: b949a021-55e1-4612-a1b0-de9148805d85
	I0916 10:54:18.119700  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:18.120094  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:18.617319  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:18.617356  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:18.617364  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:18.617370  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:18.619648  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:18.619666  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:18.619672  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:18.619675  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:18.619680  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:18.619684  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:18.619687  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:18 GMT
	I0916 10:54:18.619689  150386 round_trippers.go:580]     Audit-Id: 3b62b805-566e-4c20-b23a-9bdb959ccbcd
	I0916 10:54:18.619882  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:19.117620  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:19.117648  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:19.117658  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:19.117663  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:19.119862  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:19.119885  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:19.119894  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:19.119898  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:19.119903  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:19 GMT
	I0916 10:54:19.119907  150386 round_trippers.go:580]     Audit-Id: 56da5455-d24c-4e1a-b8be-a418fdfd2f46
	I0916 10:54:19.119910  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:19.119913  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:19.120066  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:19.617693  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:19.617722  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:19.617733  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:19.617739  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:19.619865  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:19.619887  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:19.619896  150386 round_trippers.go:580]     Audit-Id: 401e4086-8cd7-4de9-96c7-cb5c47c7cc12
	I0916 10:54:19.619902  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:19.619907  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:19.619912  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:19.619915  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:19.619919  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:19 GMT
	I0916 10:54:19.620041  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:20.116892  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:20.116916  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:20.116922  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:20.116926  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:20.119182  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:20.119212  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:20.119219  150386 round_trippers.go:580]     Audit-Id: e8971d30-03fb-4857-95d8-51fe0dcd83f2
	I0916 10:54:20.119225  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:20.119232  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:20.119234  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:20.119239  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:20.119243  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:20 GMT
	I0916 10:54:20.119408  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:20.617078  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:20.617106  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:20.617118  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:20.617125  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:20.619372  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:20.619396  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:20.619409  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:20 GMT
	I0916 10:54:20.619416  150386 round_trippers.go:580]     Audit-Id: e07f39e0-3d8b-4184-8dba-16dcd388a3e4
	I0916 10:54:20.619422  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:20.619428  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:20.619433  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:20.619442  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:20.619576  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:20.619898  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:21.117050  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.117075  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.117085  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.117089  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.119269  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.119291  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.119300  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.119307  150386 round_trippers.go:580]     Audit-Id: d94eb3b5-c407-4838-b258-f4c49214f94c
	I0916 10:54:21.119312  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.119315  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.119319  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.119324  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.119448  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.119866  150386 node_ready.go:49] node "multinode-026168" has status "Ready":"True"
	I0916 10:54:21.119886  150386 node_ready.go:38] duration metric: took 40.50340662s for node "multinode-026168" to be "Ready" ...
	I0916 10:54:21.119897  150386 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
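	(Editor's note: the lines above show the readiness loop in full: the Node object is fetched on a ~500ms cadence until its Ready condition flips to True, and only then does the wait move on to the system-critical pods. A minimal client-go sketch of that poll; the function name waitNodeReady and the interval/timeout are assumptions read off the log timestamps, not minikube's actual node_ready.go:

	// Sketch of the node-ready poll seen above: GET the Node object on an
	// interval until its Ready condition reports True.
	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls every 500ms (the cadence visible in the log)
	// for up to timeout, succeeding once the node reports Ready=True.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API hiccups as "not ready yet" and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // Ready condition not reported yet
			})
	}
	)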
	I0916 10:54:21.119993  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:21.120006  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.120016  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.120023  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.122357  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.122379  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.122386  150386 round_trippers.go:580]     Audit-Id: 31eac586-78c2-4c69-b2e4-b36bdb0db681
	I0916 10:54:21.122395  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.122398  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.122401  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.122404  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.122408  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.122909  150386 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"402","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59368 chars]
	I0916 10:54:21.127452  150386 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.127527  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:54:21.127533  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.127540  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.127545  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.129578  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.129597  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.129604  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.129610  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.129614  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.129618  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.129621  150386 round_trippers.go:580]     Audit-Id: 0695cdc4-bb80-4878-b510-951311f1c0c9
	I0916 10:54:21.129625  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.129747  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"402","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6701 chars]
	I0916 10:54:21.130168  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.130184  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.130194  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.130201  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.131803  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.131818  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.131824  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.131828  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.131831  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.131833  150386 round_trippers.go:580]     Audit-Id: c250efec-29fd-47db-be33-fca840c0d49b
	I0916 10:54:21.131836  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.131839  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.132180  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.628097  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:54:21.628128  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.628140  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.628145  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.630394  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.630421  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.630430  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.630436  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.630441  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.630446  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.630451  150386 round_trippers.go:580]     Audit-Id: 76206d57-1511-4733-a20a-f7846b30d399
	I0916 10:54:21.630455  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.630613  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0916 10:54:21.631075  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.631090  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.631099  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.631102  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.632849  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.632865  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.632871  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.632877  150386 round_trippers.go:580]     Audit-Id: ca24b158-aca0-4520-b5f1-66851865e9e1
	I0916 10:54:21.632881  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.632884  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.632893  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.632896  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.633021  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.633359  150386 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.633378  150386 pod_ready.go:82] duration metric: took 505.900424ms for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.633391  150386 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.633464  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:54:21.633474  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.633484  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.633496  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.635047  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.635060  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.635065  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.635069  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.635073  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.635076  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.635079  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.635083  150386 round_trippers.go:580]     Audit-Id: e25dddc3-2899-4dcf-b6c3-c2ebbf017b4a
	I0916 10:54:21.635202  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"382","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6435 chars]
	I0916 10:54:21.635522  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.635532  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.635539  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.635543  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.637033  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.637049  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.637056  150386 round_trippers.go:580]     Audit-Id: a402ba77-612a-4a78-9161-1f9af7dc14dc
	I0916 10:54:21.637059  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.637064  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.637067  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.637075  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.637082  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.637196  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.637568  150386 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.637585  150386 pod_ready.go:82] duration metric: took 4.183061ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.637602  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.637667  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:54:21.637678  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.637687  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.637694  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.639190  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.639200  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.639205  150386 round_trippers.go:580]     Audit-Id: 8d2d738a-85a4-4c4d-af29-f7632eaaf8fe
	I0916 10:54:21.639210  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.639215  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.639219  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.639223  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.639227  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.639415  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"384","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8513 chars]
	I0916 10:54:21.639783  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.639794  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.639801  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.639804  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.641136  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.641148  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.641154  150386 round_trippers.go:580]     Audit-Id: df3deefa-caff-4811-9e39-a5d826b48e18
	I0916 10:54:21.641157  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.641160  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.641164  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.641166  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.641169  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.641327  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.641616  150386 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.641632  150386 pod_ready.go:82] duration metric: took 4.0197ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.641643  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.641697  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:54:21.641707  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.641718  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.641724  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.643065  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.643082  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.643090  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.643097  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.643103  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.643107  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.643111  150386 round_trippers.go:580]     Audit-Id: fd45b8c3-1639-4c9c-9a3c-d1b60ed060af
	I0916 10:54:21.643119  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.643237  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"380","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8088 chars]
	I0916 10:54:21.643686  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.643701  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.643711  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.643717  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.644942  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.644955  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.644961  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.644964  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.644967  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.644970  150386 round_trippers.go:580]     Audit-Id: 796969a1-6899-441e-96f7-1ef8fe8ae578
	I0916 10:54:21.644973  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.644976  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.645122  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.645468  150386 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.645484  150386 pod_ready.go:82] duration metric: took 3.833778ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.645496  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.717891  150386 request.go:632] Waited for 72.3345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:54:21.717991  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:54:21.718003  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.718010  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.718015  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.720260  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.720288  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.720295  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.720299  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.720303  150386 round_trippers.go:580]     Audit-Id: 53ae8947-274b-4459-9e3c-cbaf6f154315
	I0916 10:54:21.720307  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.720312  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.720316  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.720465  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"348","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:54:21.917174  150386 request.go:632] Waited for 196.227739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.917256  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.917262  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.917269  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.917274  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.919941  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.919981  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.919991  150386 round_trippers.go:580]     Audit-Id: bd7a902b-75f5-47b6-a673-4bc31c4a42be
	I0916 10:54:21.919997  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.920001  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.920005  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.920009  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.920014  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.920129  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.920479  150386 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.920497  150386 pod_ready.go:82] duration metric: took 274.994935ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.920507  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:22.117992  150386 request.go:632] Waited for 197.422651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:54:22.118062  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:54:22.118066  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.118074  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.118079  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.120308  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:22.120328  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.120334  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.120340  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.120346  150386 round_trippers.go:580]     Audit-Id: 12d10cd1-f471-40a4-b04b-552d91f6b9ab
	I0916 10:54:22.120350  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.120353  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.120357  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.120521  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"377","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4970 chars]
	I0916 10:54:22.318078  150386 request.go:632] Waited for 197.115028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:22.318145  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:22.318152  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.318159  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.318165  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.320651  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:22.320674  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.320681  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.320684  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.320687  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.320691  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.320694  150386 round_trippers.go:580]     Audit-Id: 70e0a499-d469-4a3f-8d56-398e020a712a
	I0916 10:54:22.320697  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.320887  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:22.321271  150386 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:22.321289  150386 pod_ready.go:82] duration metric: took 400.776828ms for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:22.321302  150386 pod_ready.go:39] duration metric: took 1.201386489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:54:22.321330  150386 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:54:22.321414  150386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:54:22.332357  150386 command_runner.go:130] > 1502
	I0916 10:54:22.332397  150386 api_server.go:72] duration metric: took 42.33117523s to wait for apiserver process to appear ...
	I0916 10:54:22.332407  150386 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:54:22.332431  150386 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:54:22.336925  150386 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 10:54:22.336986  150386 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0916 10:54:22.336991  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.336998  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.337002  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.337746  150386 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:54:22.337771  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.337781  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.337787  150386 round_trippers.go:580]     Audit-Id: 3d9f177d-85ed-463f-96f1-b9da4dd8452c
	I0916 10:54:22.337792  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.337798  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.337804  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.337810  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.337820  150386 round_trippers.go:580]     Content-Length: 263
	I0916 10:54:22.337841  150386 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:54:22.337950  150386 api_server.go:141] control plane version: v1.31.1
	I0916 10:54:22.337970  150386 api_server.go:131] duration metric: took 5.557199ms to wait for apiserver health ...
	I0916 10:54:22.337977  150386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:54:22.517192  150386 request.go:632] Waited for 179.154193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:22.517257  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:22.517262  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.517268  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.517273  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.520573  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:22.520600  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.520612  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.520619  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.520625  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.520629  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.520633  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.520636  150386 round_trippers.go:580]     Audit-Id: 31c363b0-1712-451f-81f2-cf95c81f3f77
	I0916 10:54:22.521223  150386 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59444 chars]
	I0916 10:54:22.524175  150386 system_pods.go:59] 8 kube-system pods found
	I0916 10:54:22.524211  150386 system_pods.go:61] "coredns-7c65d6cfc9-s82cx" [85130138-c50d-47a8-8bbe-de91bb9a0472] Running
	I0916 10:54:22.524217  150386 system_pods.go:61] "etcd-multinode-026168" [7221a4cc-7e2d-41a3-b83b-579646af2de2] Running
	I0916 10:54:22.524221  150386 system_pods.go:61] "kindnet-zv2p5" [9e993dc5-3e51-407a-96f0-81c74274fb7c] Running
	I0916 10:54:22.524225  150386 system_pods.go:61] "kube-apiserver-multinode-026168" [e0a10f33-efc2-4f2d-b46c-bdb68cf664ce] Running
	I0916 10:54:22.524234  150386 system_pods.go:61] "kube-controller-manager-multinode-026168" [c0b53919-27a0-4a54-ba15-a530a06dbf0d] Running
	I0916 10:54:22.524239  150386 system_pods.go:61] "kube-proxy-6p6vt" [42162ba1-cb61-4a95-acc5-5c4c5f3ead8c] Running
	I0916 10:54:22.524244  150386 system_pods.go:61] "kube-scheduler-multinode-026168" [b293178b-0aac-457b-b950-71fdd2c8fa80] Running
	I0916 10:54:22.524250  150386 system_pods.go:61] "storage-provisioner" [ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7] Running
	I0916 10:54:22.524257  150386 system_pods.go:74] duration metric: took 186.274611ms to wait for pod list to return data ...
	I0916 10:54:22.524270  150386 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:54:22.717753  150386 request.go:632] Waited for 193.393723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:54:22.717852  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:54:22.717863  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.717874  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.717882  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.721139  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:22.721169  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.721177  150386 round_trippers.go:580]     Content-Length: 261
	I0916 10:54:22.721183  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.721187  150386 round_trippers.go:580]     Audit-Id: 2d2d0765-fe8f-4a12-ae5f-a890fee1ee4b
	I0916 10:54:22.721191  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.721196  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.721200  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.721204  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.721233  150386 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3f54840f-e917-4b73-aac8-060ce8f211be","resourceVersion":"325","creationTimestamp":"2024-09-16T10:53:39Z"}}]}
	I0916 10:54:22.721473  150386 default_sa.go:45] found service account: "default"
	I0916 10:54:22.721494  150386 default_sa.go:55] duration metric: took 197.218223ms for default service account to be created ...
	I0916 10:54:22.721507  150386 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:54:22.917603  150386 request.go:632] Waited for 196.008334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:22.917692  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:22.917700  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.917710  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.917722  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.920897  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:22.920919  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.920926  150386 round_trippers.go:580]     Audit-Id: 59cb84a7-961b-4c43-b13a-5cdcd0ab7320
	I0916 10:54:22.920930  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.920933  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.920937  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.920940  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.920943  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.921535  150386 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59444 chars]
	I0916 10:54:22.923403  150386 system_pods.go:86] 8 kube-system pods found
	I0916 10:54:22.923430  150386 system_pods.go:89] "coredns-7c65d6cfc9-s82cx" [85130138-c50d-47a8-8bbe-de91bb9a0472] Running
	I0916 10:54:22.923435  150386 system_pods.go:89] "etcd-multinode-026168" [7221a4cc-7e2d-41a3-b83b-579646af2de2] Running
	I0916 10:54:22.923439  150386 system_pods.go:89] "kindnet-zv2p5" [9e993dc5-3e51-407a-96f0-81c74274fb7c] Running
	I0916 10:54:22.923442  150386 system_pods.go:89] "kube-apiserver-multinode-026168" [e0a10f33-efc2-4f2d-b46c-bdb68cf664ce] Running
	I0916 10:54:22.923446  150386 system_pods.go:89] "kube-controller-manager-multinode-026168" [c0b53919-27a0-4a54-ba15-a530a06dbf0d] Running
	I0916 10:54:22.923451  150386 system_pods.go:89] "kube-proxy-6p6vt" [42162ba1-cb61-4a95-acc5-5c4c5f3ead8c] Running
	I0916 10:54:22.923455  150386 system_pods.go:89] "kube-scheduler-multinode-026168" [b293178b-0aac-457b-b950-71fdd2c8fa80] Running
	I0916 10:54:22.923458  150386 system_pods.go:89] "storage-provisioner" [ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7] Running
	I0916 10:54:22.923463  150386 system_pods.go:126] duration metric: took 201.948979ms to wait for k8s-apps to be running ...
	I0916 10:54:22.923470  150386 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:54:22.923512  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:54:22.935482  150386 system_svc.go:56] duration metric: took 12.003954ms WaitForService to wait for kubelet
	I0916 10:54:22.935510  150386 kubeadm.go:582] duration metric: took 42.934287833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:22.935531  150386 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:54:23.117992  150386 request.go:632] Waited for 182.386401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:54:23.118099  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:54:23.118109  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:23.118120  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:23.118130  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:23.121007  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:23.121033  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:23.121043  150386 round_trippers.go:580]     Audit-Id: 13b6d3ea-0fca-4ca7-8081-ec0a3e9b8e01
	I0916 10:54:23.121051  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:23.121055  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:23.121059  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:23.121063  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:23.121067  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:23 GMT
	I0916 10:54:23.121274  150386 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0916 10:54:23.121686  150386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:54:23.121712  150386 node_conditions.go:123] node cpu capacity is 8
	I0916 10:54:23.121726  150386 node_conditions.go:105] duration metric: took 186.188965ms to run NodePressure ...
	I0916 10:54:23.121741  150386 start.go:241] waiting for startup goroutines ...
	I0916 10:54:23.121753  150386 start.go:246] waiting for cluster config update ...
	I0916 10:54:23.121771  150386 start.go:255] writing updated cluster config ...
	I0916 10:54:23.124160  150386 out.go:201] 
	I0916 10:54:23.125798  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:54:23.125924  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:54:23.127806  150386 out.go:177] * Starting "multinode-026168-m02" worker node in "multinode-026168" cluster
	I0916 10:54:23.129676  150386 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:54:23.131281  150386 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:54:23.132722  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:54:23.132755  150386 cache.go:56] Caching tarball of preloaded images
	I0916 10:54:23.132834  150386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:54:23.132867  150386 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:54:23.132883  150386 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:54:23.132994  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	W0916 10:54:23.153756  150386 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:54:23.153779  150386 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:54:23.153875  150386 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:54:23.153894  150386 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:54:23.153900  150386 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:54:23.153920  150386 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:54:23.153928  150386 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:54:23.155051  150386 image.go:273] response: 
	I0916 10:54:23.212231  150386 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:54:23.212268  150386 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:54:23.212308  150386 start.go:360] acquireMachinesLock for multinode-026168-m02: {Name:mk244ea9c32e56587b67dd9c9f2d4f0dcccd26e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:54:23.212428  150386 start.go:364] duration metric: took 97.765µs to acquireMachinesLock for "multinode-026168-m02"
	I0916 10:54:23.212460  150386 start.go:93] Provisioning new machine with config: &{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 10:54:23.212535  150386 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 10:54:23.214703  150386 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:54:23.214819  150386 start.go:159] libmachine.API.Create for "multinode-026168" (driver="docker")
	I0916 10:54:23.214849  150386 client.go:168] LocalClient.Create starting
	I0916 10:54:23.214929  150386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:54:23.214972  150386 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:23.214987  150386 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:23.215035  150386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:54:23.215053  150386 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:23.215063  150386 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:23.215253  150386 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:54:23.231940  150386 network_create.go:77] Found existing network {name:multinode-026168 subnet:0xc002012150 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0916 10:54:23.231978  150386 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-026168-m02" container
	I0916 10:54:23.232031  150386 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:54:23.247936  150386 cli_runner.go:164] Run: docker volume create multinode-026168-m02 --label name.minikube.sigs.k8s.io=multinode-026168-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:54:23.265752  150386 oci.go:103] Successfully created a docker volume multinode-026168-m02
	I0916 10:54:23.265835  150386 cli_runner.go:164] Run: docker run --rm --name multinode-026168-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-026168-m02 --entrypoint /usr/bin/test -v multinode-026168-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:54:23.761053  150386 oci.go:107] Successfully prepared a docker volume multinode-026168-m02
	I0916 10:54:23.761096  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:54:23.761121  150386 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:54:23.761183  150386 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-026168-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:54:28.208705  150386 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-026168-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.447479357s)
	I0916 10:54:28.208743  150386 kic.go:203] duration metric: took 4.447620046s to extract preloaded images to volume ...
	W0916 10:54:28.208853  150386 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:54:28.208937  150386 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:54:28.258744  150386 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-026168-m02 --name multinode-026168-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-026168-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-026168-m02 --network multinode-026168 --ip 192.168.67.3 --volume multinode-026168-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:54:28.552494  150386 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Running}}
	I0916 10:54:28.570713  150386 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:54:28.589273  150386 cli_runner.go:164] Run: docker exec multinode-026168-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:54:28.632228  150386 oci.go:144] the created container "multinode-026168-m02" has a running status.
	I0916 10:54:28.632263  150386 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa...
	I0916 10:54:28.724402  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:54:28.724451  150386 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:54:28.745185  150386 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:54:28.762081  150386 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:54:28.762103  150386 kic_runner.go:114] Args: [docker exec --privileged multinode-026168-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:54:28.807858  150386 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:54:28.824342  150386 machine.go:93] provisionDockerMachine start ...
	I0916 10:54:28.824429  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:28.843239  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:54:28.843559  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:54:28.843585  150386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:54:28.844383  150386 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51938->127.0.0.1:32908: read: connection reset by peer
	I0916 10:54:31.976892  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m02
	
	I0916 10:54:31.976922  150386 ubuntu.go:169] provisioning hostname "multinode-026168-m02"
	I0916 10:54:31.976973  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:31.994091  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:54:31.994288  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:54:31.994304  150386 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168-m02 && echo "multinode-026168-m02" | sudo tee /etc/hostname
	I0916 10:54:32.140171  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m02
	
	I0916 10:54:32.140251  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:32.157277  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:54:32.157465  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:54:32.157485  150386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:54:32.289554  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:54:32.289591  150386 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:54:32.289616  150386 ubuntu.go:177] setting up certificates
	I0916 10:54:32.289631  150386 provision.go:84] configureAuth start
	I0916 10:54:32.289700  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:54:32.306551  150386 provision.go:143] copyHostCerts
	I0916 10:54:32.306588  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:54:32.306618  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:54:32.306624  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:54:32.306708  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:54:32.306801  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:54:32.306828  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:54:32.306837  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:54:32.306876  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:54:32.306945  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:54:32.306970  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:54:32.306980  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:54:32.307014  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:54:32.307135  150386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-026168-m02]
	I0916 10:54:32.488245  150386 provision.go:177] copyRemoteCerts
	I0916 10:54:32.488298  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:54:32.488335  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:32.506446  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:32.602051  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:54:32.602141  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:54:32.623639  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:54:32.623701  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:54:32.646080  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:54:32.646141  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:54:32.668553  150386 provision.go:87] duration metric: took 378.909929ms to configureAuth
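configureAuth above synchronizes the CA material and then mints a server certificate whose SANs cover 127.0.0.1, 192.168.67.3, localhost, minikube, and the node name. A self-contained sketch of issuing such a CA-signed server certificate with Go's standard library follows; the throwaway in-memory CA and the three-year lifetime are assumptions for illustration, not the exact parameters minikube uses.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	ca, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs from the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-026168-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed lifetime, for illustration only
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-026168-m02"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	check(err)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}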
	I0916 10:54:32.668581  150386 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:54:32.668762  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:54:32.668869  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:32.687689  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:54:32.687890  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:54:32.687908  150386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:54:32.911387  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:54:32.911413  150386 machine.go:96] duration metric: took 4.087048728s to provisionDockerMachine
	I0916 10:54:32.911423  150386 client.go:171] duration metric: took 9.696565035s to LocalClient.Create
	I0916 10:54:32.911442  150386 start.go:167] duration metric: took 9.696623047s to libmachine.API.Create "multinode-026168"
	I0916 10:54:32.911451  150386 start.go:293] postStartSetup for "multinode-026168-m02" (driver="docker")
	I0916 10:54:32.911464  150386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:54:32.911527  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:54:32.911563  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:32.929049  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:33.030331  150386 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:54:33.033229  150386 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:54:33.033271  150386 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:54:33.033283  150386 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:54:33.033292  150386 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:54:33.033301  150386 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:54:33.033307  150386 command_runner.go:130] > ID=ubuntu
	I0916 10:54:33.033313  150386 command_runner.go:130] > ID_LIKE=debian
	I0916 10:54:33.033323  150386 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:54:33.033328  150386 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:54:33.033362  150386 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:54:33.033376  150386 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:54:33.033385  150386 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:54:33.033452  150386 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:54:33.033475  150386 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:54:33.033482  150386 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:54:33.033488  150386 info.go:137] Remote host: Ubuntu 22.04.4 LTS
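The PRETTY_NAME/VERSION lines above are /etc/os-release being read back over SSH; the "Couldn't set key" warnings just mean the parser's target struct has no field for those keys. A small sketch of parsing the same KEY=value format into a map, using only the standard library:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads KEY=value pairs as they appear in /etc/os-release,
// stripping surrounding quotes from the values.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	out := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s %s\n", info["NAME"], info["VERSION"]) // e.g. "Ubuntu 22.04.4 LTS (Jammy Jellyfish)"
}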
	I0916 10:54:33.033498  150386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:54:33.033548  150386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:54:33.033614  150386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:54:33.033622  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:54:33.033715  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:54:33.041732  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:54:33.063842  150386 start.go:296] duration metric: took 152.375443ms for postStartSetup
	I0916 10:54:33.064206  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:54:33.081271  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:54:33.081670  150386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:54:33.081714  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:33.099427  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:33.190562  150386 command_runner.go:130] > 30%
	I0916 10:54:33.190640  150386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:54:33.194859  150386 command_runner.go:130] > 204G
	I0916 10:54:33.195150  150386 start.go:128] duration metric: took 9.982603136s to createHost
	I0916 10:54:33.195175  150386 start.go:83] releasing machines lock for "multinode-026168-m02", held for 9.982732368s
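The two df probes above report percent-used (30%) and free gigabytes (204G) for /var before the host is declared created. Roughly the same numbers can be obtained without shelling out, via a Linux statfs call; a sketch (the used-space arithmetic is approximate and the syscall is Linux-specific):

package main

import (
	"fmt"
	"syscall"
)

// diskUsage reports approximate percent used and free gigabytes for the
// filesystem at path, mirroring the `df -h` / `df -BG` probes in the log.
func diskUsage(path string) (usedPct int, freeGB uint64, err error) {
	var st syscall.Statfs_t
	if err = syscall.Statfs(path, &st); err != nil {
		return 0, 0, err
	}
	used := st.Blocks - st.Bfree
	usedPct = int(100 * used / (used + st.Bavail)) // df-style: free space reserved for root excluded
	freeGB = st.Bavail * uint64(st.Bsize) >> 30
	return usedPct, freeGB, nil
}

func main() {
	pct, gb, err := diskUsage("/var")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d%% used, %dG free\n", pct, gb)
}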
	I0916 10:54:33.195248  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:54:33.214796  150386 out.go:177] * Found network options:
	I0916 10:54:33.216317  150386 out.go:177]   - NO_PROXY=192.168.67.2
	W0916 10:54:33.217848  150386 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:54:33.217906  150386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:54:33.218001  150386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:54:33.218053  150386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:54:33.218061  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:33.218103  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:33.236009  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:33.236423  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:33.405768  150386 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:54:33.464179  150386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:54:33.468338  150386 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:54:33.468368  150386 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:54:33.468378  150386 command_runner.go:130] > Device: b7h/183d	Inode: 535096      Links: 1
	I0916 10:54:33.468384  150386 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:54:33.468390  150386 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:54:33.468395  150386 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:54:33.468399  150386 command_runner.go:130] > Change: 2024-09-16 10:23:14.009756274 +0000
	I0916 10:54:33.468416  150386 command_runner.go:130] >  Birth: 2024-09-16 10:23:14.009756274 +0000
	I0916 10:54:33.468693  150386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:54:33.486323  150386 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:54:33.486417  150386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:54:33.513648  150386 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0916 10:54:33.513703  150386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
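Here the stock loopback, podman-bridge, and crio-bridge CNI configs are moved aside with a .mk_disabled suffix so they cannot conflict with the CNI minikube installs. A sketch of the same rename pass in Go, assuming the /etc/cni/net.d layout shown in the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames bridge/podman CNI configs out of the way by adding
// the .mk_disabled suffix, echoing the find/mv pipeline in the log.
func disableCNIConfs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableCNIConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", moved)
}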
	I0916 10:54:33.513713  150386 start.go:495] detecting cgroup driver to use...
	I0916 10:54:33.513749  150386 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:54:33.513797  150386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:54:33.528251  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:54:33.540275  150386 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:54:33.540343  150386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:54:33.552913  150386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:54:33.566361  150386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:54:33.639899  150386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:54:33.731263  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 10:54:33.731311  150386 docker.go:233] disabling docker service ...
	I0916 10:54:33.731365  150386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:54:33.749417  150386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:54:33.760326  150386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:54:33.843879  150386 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0916 10:54:33.843949  150386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:54:33.930022  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 10:54:33.930110  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:54:33.940911  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:54:33.956121  150386 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:54:33.956165  150386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:54:33.956211  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:33.966074  150386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:54:33.966138  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:33.975297  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:33.984512  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:33.993945  150386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:54:34.002689  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:34.012279  150386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:34.026984  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:34.036614  150386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:54:34.043858  150386 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:54:34.044465  150386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:54:34.052424  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:54:34.131587  150386 ssh_runner.go:195] Run: sudo systemctl restart crio
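The sed pipeline above pins the pause image to registry.k8s.io/pause:3.10 and forces cgroup_manager = "cgroupfs" in the CRI-O drop-in before restarting the service. A sketch of the same two rewrites done with regexp instead of sed; the drop-in path is the one from the log:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits above: pin the pause image and force
// the cgroupfs cgroup manager in a CRI-O drop-in file.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	return os.WriteFile(path, []byte(s), 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}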
	I0916 10:54:34.245486  150386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:54:34.245562  150386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:54:34.248995  150386 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:54:34.249028  150386 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:54:34.249038  150386 command_runner.go:130] > Device: c0h/192d	Inode: 186         Links: 1
	I0916 10:54:34.249045  150386 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:54:34.249050  150386 command_runner.go:130] > Access: 2024-09-16 10:54:34.232046114 +0000
	I0916 10:54:34.249056  150386 command_runner.go:130] > Modify: 2024-09-16 10:54:34.232046114 +0000
	I0916 10:54:34.249061  150386 command_runner.go:130] > Change: 2024-09-16 10:54:34.232046114 +0000
	I0916 10:54:34.249065  150386 command_runner.go:130] >  Birth: -
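After the restart, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl; the stat output above shows it already exists. A minimal polling loop for that wait might look like this (the 500ms interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket exists or the deadline passes,
// analogous to the 60s wait on /var/run/crio/crio.sock in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}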
	I0916 10:54:34.249111  150386 start.go:563] Will wait 60s for crictl version
	I0916 10:54:34.249160  150386 ssh_runner.go:195] Run: which crictl
	I0916 10:54:34.252370  150386 command_runner.go:130] > /usr/bin/crictl
	I0916 10:54:34.252469  150386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:54:34.284451  150386 command_runner.go:130] > Version:  0.1.0
	I0916 10:54:34.284476  150386 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:54:34.284480  150386 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:54:34.284486  150386 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:54:34.286613  150386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:54:34.286695  150386 ssh_runner.go:195] Run: crio --version
	I0916 10:54:34.319283  150386 command_runner.go:130] > crio version 1.24.6
	I0916 10:54:34.319304  150386 command_runner.go:130] > Version:          1.24.6
	I0916 10:54:34.319313  150386 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:54:34.319320  150386 command_runner.go:130] > GitTreeState:     clean
	I0916 10:54:34.319329  150386 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:54:34.319337  150386 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:54:34.319343  150386 command_runner.go:130] > Compiler:         gc
	I0916 10:54:34.319351  150386 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:54:34.319357  150386 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:54:34.319365  150386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:54:34.319369  150386 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:54:34.319373  150386 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:54:34.321161  150386 ssh_runner.go:195] Run: crio --version
	I0916 10:54:34.354614  150386 command_runner.go:130] > crio version 1.24.6
	I0916 10:54:34.354644  150386 command_runner.go:130] > Version:          1.24.6
	I0916 10:54:34.354656  150386 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:54:34.354664  150386 command_runner.go:130] > GitTreeState:     clean
	I0916 10:54:34.354672  150386 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:54:34.354679  150386 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:54:34.354686  150386 command_runner.go:130] > Compiler:         gc
	I0916 10:54:34.354694  150386 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:54:34.354702  150386 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:54:34.354716  150386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:54:34.354722  150386 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:54:34.354729  150386 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:54:34.356900  150386 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:54:34.358515  150386 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:54:34.359941  150386 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:54:34.377238  150386 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:54:34.380850  150386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
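This grep-and-rewrite keeps exactly one host.minikube.internal line in /etc/hosts, pointing at the network gateway 192.168.67.1. A sketch of the same replace-or-append in Go; the path and mapping are taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line already mapping host and appends a fresh
// "ip<TAB>host" entry, mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}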
	I0916 10:54:34.390936  150386 mustload.go:65] Loading cluster: multinode-026168
	I0916 10:54:34.391127  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:54:34.391324  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:54:34.410822  150386 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:54:34.411143  150386 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.3
	I0916 10:54:34.411160  150386 certs.go:194] generating shared ca certs ...
	I0916 10:54:34.411182  150386 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:54:34.411329  150386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:54:34.411392  150386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:54:34.411411  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:54:34.411433  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:54:34.411454  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:54:34.411477  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:54:34.411547  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:54:34.411599  150386 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:54:34.411613  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:54:34.411653  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:54:34.411690  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:54:34.411725  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:54:34.411788  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:54:34.411828  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.411848  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.411867  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.411895  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:54:34.435909  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:54:34.458727  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:54:34.481625  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:54:34.502802  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:54:34.525129  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:54:34.547503  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:54:34.570192  150386 ssh_runner.go:195] Run: openssl version
	I0916 10:54:34.575429  150386 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:54:34.575514  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:54:34.584455  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.587759  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.587789  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.587825  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.593965  150386 command_runner.go:130] > 51391683
	I0916 10:54:34.594155  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:54:34.602965  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:54:34.611628  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.615051  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.615113  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.615162  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.621281  150386 command_runner.go:130] > 3ec20f2e
	I0916 10:54:34.621469  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:54:34.630305  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:54:34.639257  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.642542  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.642573  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.642618  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.648922  150386 command_runner.go:130] > b5213941
	I0916 10:54:34.648987  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
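The three cycles above install each CA into /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (51391683, 3ec20f2e, b5213941). A sketch of one such cycle, shelling out to the same openssl invocation the log uses; the certificate path is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash reproduces the openssl x509 -hash + ln -fs dance: compute the
// subject hash and point /etc/ssl/certs/<hash>.0 at the certificate.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}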
	I0916 10:54:34.657747  150386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:54:34.660935  150386 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:54:34.660982  150386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:54:34.661027  150386 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.31.1 crio false true} ...
	I0916 10:54:34.661126  150386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:54:34.661292  150386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:54:34.669451  150386 command_runner.go:130] > kubeadm
	I0916 10:54:34.669476  150386 command_runner.go:130] > kubectl
	I0916 10:54:34.669482  150386 command_runner.go:130] > kubelet
	I0916 10:54:34.669508  150386 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:54:34.669558  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:54:34.677633  150386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I0916 10:54:34.694198  150386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:54:34.710629  150386 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:54:34.714201  150386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:54:34.724359  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:54:34.799551  150386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:54:34.812199  150386 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:54:34.812442  150386 start.go:317] joinCluster: &{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:54:34.812523  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:54:34.812562  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:54:34.831158  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:54:34.972349  150386 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token u9veb8.vmzv8qzigtxm2pxd --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:54:34.977238  150386 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 10:54:34.977276  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u9veb8.vmzv8qzigtxm2pxd --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-026168-m02"
	I0916 10:54:35.018804  150386 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:54:35.072593  150386 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:54:36.225928  150386 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 10:54:36.225960  150386 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:54:36.225972  150386 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:54:36.225980  150386 command_runner.go:130] > OS: Linux
	I0916 10:54:36.225988  150386 command_runner.go:130] > CGROUPS_CPU: enabled
	I0916 10:54:36.226001  150386 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0916 10:54:36.226011  150386 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0916 10:54:36.226021  150386 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0916 10:54:36.226031  150386 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0916 10:54:36.226043  150386 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0916 10:54:36.226058  150386 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0916 10:54:36.226069  150386 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0916 10:54:36.226080  150386 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0916 10:54:36.226091  150386 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0916 10:54:36.226103  150386 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0916 10:54:36.226123  150386 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:54:36.226138  150386 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:54:36.226149  150386 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 10:54:36.226170  150386 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:54:36.226182  150386 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001502695s
	I0916 10:54:36.226194  150386 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0916 10:54:36.226203  150386 command_runner.go:130] > This node has joined the cluster:
	I0916 10:54:36.226212  150386 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0916 10:54:36.226224  150386 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0916 10:54:36.226238  150386 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0916 10:54:36.226265  150386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u9veb8.vmzv8qzigtxm2pxd --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-026168-m02": (1.248974228s)
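The worker join itself is a single kubeadm invocation: the control plane prints a join command with a fresh token, and the new node runs it with --ignore-preflight-errors=all, the CRI-O socket, and an explicit node name. A sketch of assembling and running that command; the token and hash here are placeholders, not the real cluster credentials:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// joinWorker appends the worker-specific flags seen in the log to a join
// command produced by `kubeadm token create --print-join-command`, then runs it.
func joinWorker(joinCmd, nodeName string) error {
	full := strings.Join([]string{
		"sudo", joinCmd,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
	}, " ")
	cmd := exec.Command("/bin/bash", "-c", full)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Placeholder token/hash; in the log these come from the control plane.
	join := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	if err := joinWorker(join, "multinode-026168-m02"); err != nil {
		fmt.Fprintln(os.Stderr, "join failed:", err)
		os.Exit(1)
	}
}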
	I0916 10:54:36.226367  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:54:36.390991  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0916 10:54:36.391100  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-026168-m02 minikube.k8s.io/updated_at=2024_09_16T10_54_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-026168 minikube.k8s.io/primary=false
	I0916 10:54:36.460188  150386 command_runner.go:130] > node/multinode-026168-m02 labeled
	I0916 10:54:36.462930  150386 start.go:319] duration metric: took 1.650478524s to joinCluster
	I0916 10:54:36.463021  150386 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 10:54:36.463283  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:54:36.464831  150386 out.go:177] * Verifying Kubernetes components...
	I0916 10:54:36.466257  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:54:36.546582  150386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:54:36.558067  150386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:54:36.558320  150386 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:54:36.558583  150386 node_ready.go:35] waiting up to 6m0s for node "multinode-026168-m02" to be "Ready" ...
	I0916 10:54:36.558672  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:36.558683  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:36.558693  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:36.558699  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:36.561008  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:36.561026  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:36.561033  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:36.561036  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:36.561039  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:36 GMT
	I0916 10:54:36.561042  150386 round_trippers.go:580]     Audit-Id: 8f46dc76-ad7f-4da6-9680-019ddaa49119
	I0916 10:54:36.561046  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:36.561049  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:36.561236  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:37.058856  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:37.058880  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:37.058888  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:37.058893  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:37.060924  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:37.060944  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:37.060949  150386 round_trippers.go:580]     Audit-Id: 20752ce4-2144-4c1d-ad86-b2a8ceeaebe9
	I0916 10:54:37.060953  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:37.060956  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:37.060959  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:37.060961  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:37.060967  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:37 GMT
	I0916 10:54:37.061139  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:37.558759  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:37.558784  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:37.558791  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:37.558796  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:37.560947  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:37.560987  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:37.560997  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:37.561007  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:37.561012  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:37 GMT
	I0916 10:54:37.561018  150386 round_trippers.go:580]     Audit-Id: fc35c5c3-40bc-4e37-8dac-a02b2a41e9c0
	I0916 10:54:37.561022  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:37.561026  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:37.561125  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:38.059797  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:38.059821  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:38.059837  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:38.059846  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:38.063841  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:38.063870  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:38.063879  150386 round_trippers.go:580]     Audit-Id: a1919aa4-dc0b-4bf0-ab2b-38f72f0b0aa1
	I0916 10:54:38.063885  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:38.063891  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:38.063895  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:38.063900  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:38.063903  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:38 GMT
	I0916 10:54:38.064015  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:38.559776  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:38.559801  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:38.559809  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:38.559814  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:38.562182  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:38.562203  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:38.562211  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:38.562217  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:38.562221  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:38.562226  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:38.562229  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:38 GMT
	I0916 10:54:38.562233  150386 round_trippers.go:580]     Audit-Id: 094963d1-4f41-4386-bddc-a015db6d34d7
	I0916 10:54:38.562393  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:38.562733  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
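From 10:54:36.558 onward the log polls GET /api/v1/nodes/multinode-026168-m02 roughly every 500ms, checking for a Ready=True condition (still False at this point). The equivalent check written against client-go might look like the sketch below; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver every 500ms until the node reports
// Ready=True or the timeout expires, the same check node_ready.go performs.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q never became Ready", name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "multinode-026168-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}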
	I0916 10:54:39.058978  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:39.058997  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:39.059005  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:39.059009  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:39.061133  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:39.061151  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:39.061158  150386 round_trippers.go:580]     Audit-Id: f2fc3ed4-f50e-4e63-9c7a-d99444e39cd3
	I0916 10:54:39.061161  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:39.061165  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:39.061170  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:39.061174  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:39.061177  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:39 GMT
	I0916 10:54:39.061360  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:39.559785  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:39.559821  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:39.559833  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:39.559841  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:39.561493  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:39.561519  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:39.561529  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:39.561534  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:39 GMT
	I0916 10:54:39.561538  150386 round_trippers.go:580]     Audit-Id: a558e1f5-c0a8-49d9-979e-80f53674df2f
	I0916 10:54:39.561543  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:39.561548  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:39.561551  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:39.561740  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:40.059712  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:40.059734  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:40.059742  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:40.059746  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:40.062048  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:40.062074  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:40.062084  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:40.062089  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:40.062093  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:40 GMT
	I0916 10:54:40.062096  150386 round_trippers.go:580]     Audit-Id: 80503784-f557-4228-9717-6994d3d05b4f
	I0916 10:54:40.062100  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:40.062104  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:40.062271  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:40.558882  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:40.558914  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:40.558926  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:40.558930  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:40.561019  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:40.561040  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:40.561048  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:40.561054  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:40.561058  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:40.561062  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:40.561065  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:40 GMT
	I0916 10:54:40.561069  150386 round_trippers.go:580]     Audit-Id: f7d334f6-00c5-40a9-a183-175e0af44ddc
	I0916 10:54:40.561188  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:41.058836  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:41.058863  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:41.058871  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:41.058877  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:41.060988  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:41.061006  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:41.061012  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:41 GMT
	I0916 10:54:41.061016  150386 round_trippers.go:580]     Audit-Id: f15bd309-4105-4d0a-9630-4f241689b355
	I0916 10:54:41.061019  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:41.061023  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:41.061027  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:41.061032  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:41.061199  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:41.061532  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:41.559228  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:41.559252  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:41.559260  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:41.559266  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:41.561628  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:41.561651  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:41.561657  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:41 GMT
	I0916 10:54:41.561660  150386 round_trippers.go:580]     Audit-Id: 5ce89702-487f-452a-a40c-b44caae40ad6
	I0916 10:54:41.561663  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:41.561667  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:41.561670  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:41.561674  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:41.561863  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:42.059628  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:42.059655  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:42.059664  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:42.059668  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:42.061852  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:42.061876  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:42.061885  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:42.061889  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:42.061892  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:42 GMT
	I0916 10:54:42.061898  150386 round_trippers.go:580]     Audit-Id: f7735a59-280c-4558-a3ce-24e8a69394c7
	I0916 10:54:42.061903  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:42.061909  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:42.062063  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:42.558895  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:42.558924  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:42.558932  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:42.558937  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:42.561242  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:42.561264  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:42.561273  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:42.561279  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:42.561283  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:42.561287  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:42.561291  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:42 GMT
	I0916 10:54:42.561296  150386 round_trippers.go:580]     Audit-Id: 13e11d2a-bb0b-41f9-b7fa-7c53c43f221c
	I0916 10:54:42.561489  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:43.059093  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:43.059117  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:43.059124  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:43.059129  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:43.061547  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:43.061567  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:43.061574  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:43 GMT
	I0916 10:54:43.061581  150386 round_trippers.go:580]     Audit-Id: c5042587-91e3-427b-8603-77d3661b2276
	I0916 10:54:43.061586  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:43.061590  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:43.061594  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:43.061600  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:43.061771  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:43.062094  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:43.559575  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:43.559605  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:43.559614  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:43.559620  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:43.562211  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:43.562237  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:43.562247  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:43 GMT
	I0916 10:54:43.562252  150386 round_trippers.go:580]     Audit-Id: bfd8433c-34d9-47de-9883-d42ad0978123
	I0916 10:54:43.562258  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:43.562264  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:43.562269  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:43.562273  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:43.562442  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:44.058974  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:44.059000  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:44.059013  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:44.059017  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:44.061362  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:44.061383  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:44.061391  150386 round_trippers.go:580]     Audit-Id: 8f5e8125-cd52-4c90-922e-8fcde03efd6e
	I0916 10:54:44.061397  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:44.061403  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:44.061407  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:44.061410  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:44.061414  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:44 GMT
	I0916 10:54:44.061577  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:44.559156  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:44.559187  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:44.559197  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:44.559202  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:44.561479  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:44.561502  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:44.561508  150386 round_trippers.go:580]     Audit-Id: b0fc223c-7adf-45d1-8010-3cb4321a899d
	I0916 10:54:44.561512  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:44.561516  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:44.561519  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:44.561522  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:44.561524  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:44 GMT
	I0916 10:54:44.561760  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:45.059527  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:45.059554  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:45.059562  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:45.059568  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:45.062061  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:45.062082  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:45.062088  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:45.062092  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:45.062096  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:45.062098  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:45 GMT
	I0916 10:54:45.062101  150386 round_trippers.go:580]     Audit-Id: 68b32f38-eca8-4ef1-9a22-804d03651568
	I0916 10:54:45.062104  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:45.062286  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:45.062613  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:45.558926  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:45.558948  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:45.558956  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:45.558959  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:45.561097  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:45.561119  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:45.561127  150386 round_trippers.go:580]     Audit-Id: a0c59747-e495-48a3-b73c-18eb719b469f
	I0916 10:54:45.561133  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:45.561137  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:45.561142  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:45.561147  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:45.561151  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:45 GMT
	I0916 10:54:45.561310  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:46.058890  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:46.058920  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:46.058931  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:46.058937  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:46.061211  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:46.061234  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:46.061244  150386 round_trippers.go:580]     Audit-Id: e4c4262b-3cb8-4ac1-a365-135991a926cb
	I0916 10:54:46.061251  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:46.061257  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:46.061263  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:46.061269  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:46.061274  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:46 GMT
	I0916 10:54:46.061410  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:46.559277  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:46.559301  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:46.559311  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:46.559318  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:46.562075  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:46.562096  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:46.562104  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:46.562110  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:46.562116  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:46.562122  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:46.562128  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:46 GMT
	I0916 10:54:46.562133  150386 round_trippers.go:580]     Audit-Id: 7790a5c4-e4ad-4dcb-a279-e32196f4ce24
	I0916 10:54:46.562401  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:47.059356  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:47.059389  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:47.059400  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:47.059405  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:47.061596  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:47.061621  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:47.061630  150386 round_trippers.go:580]     Audit-Id: bd72d213-b993-4f2e-a72e-bd57a0e93532
	I0916 10:54:47.061638  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:47.061643  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:47.061648  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:47.061653  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:47.061657  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:47 GMT
	I0916 10:54:47.061868  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:47.559518  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:47.559544  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:47.559553  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:47.559560  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:47.561928  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:47.561948  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:47.561955  150386 round_trippers.go:580]     Audit-Id: e2c3457d-da66-4879-a51c-b83355d8be98
	I0916 10:54:47.561958  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:47.561961  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:47.561965  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:47.561968  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:47.561970  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:47 GMT
	I0916 10:54:47.562158  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:47.562483  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:48.058791  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:48.058813  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:48.058821  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:48.058833  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:48.060746  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:48.060768  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:48.060776  150386 round_trippers.go:580]     Audit-Id: 4481350a-d61e-4437-a9b2-62502ba2f9d9
	I0916 10:54:48.060783  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:48.060788  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:48.060794  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:48.060798  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:48.060802  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:48 GMT
	I0916 10:54:48.060976  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:48.559733  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:48.559765  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:48.559775  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:48.559781  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:48.562175  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:48.562200  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:48.562207  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:48.562211  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:48.562213  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:48.562216  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:48.562219  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:48 GMT
	I0916 10:54:48.562221  150386 round_trippers.go:580]     Audit-Id: b00640ac-a1e7-4281-bb8e-112d8e2c8f12
	I0916 10:54:48.562391  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:49.059002  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:49.059034  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.059043  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.059047  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.061464  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.061484  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.061493  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.061497  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.061500  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.061503  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.061506  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.061508  150386 round_trippers.go:580]     Audit-Id: 1e9847af-1950-4064-bb0f-c79ac1adf35f
	I0916 10:54:49.061700  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"489","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5855 chars]
	I0916 10:54:49.062013  150386 node_ready.go:49] node "multinode-026168-m02" has status "Ready":"True"
	I0916 10:54:49.062028  150386 node_ready.go:38] duration metric: took 12.503428835s for node "multinode-026168-m02" to be "Ready" ...
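Editor's note: the loop above is minikube's node-readiness wait. The log shows it issuing a GET on the Node object roughly every 500ms and checking the node's Ready condition until it flips to "True" (here after ~12.5s, once the node's resourceVersion advanced from 472 to 485 to 489). Below is a minimal client-go sketch of that polling pattern, assuming a standard kubeconfig; the function name waitNodeReady and the file layout are ours for illustration, not minikube's actual implementation.

// Sketch only: poll a Node until its Ready condition is True, as the log records.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady GETs the node every `interval` and returns once its
// NodeReady condition reports True, or errors after `timeout`.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil // matches the log's `has status "Ready":"True"`
			}
		}
		time.Sleep(interval) // the log shows ~500ms between successive GETs
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-026168-m02", 500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}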
	I0916 10:54:49.062036  150386 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:54:49.062099  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:49.062111  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.062118  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.062123  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.064914  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.064943  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.064953  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.064960  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.064967  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.064974  150386 round_trippers.go:580]     Audit-Id: b951a70a-3787-4e5f-a6d2-75a1ff6b3c9d
	I0916 10:54:49.064980  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.064985  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.065555  150386 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 74117 chars]
	I0916 10:54:49.067794  150386 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.067870  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:54:49.067878  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.067885  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.067889  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.069615  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.069630  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.069637  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.069642  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.069645  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.069647  150386 round_trippers.go:580]     Audit-Id: 823ce3ed-d62f-415c-a2f0-7f031d7725c3
	I0916 10:54:49.069650  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.069653  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.069893  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0916 10:54:49.070315  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.070328  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.070335  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.070338  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.072071  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.072090  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.072098  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.072103  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.072110  150386 round_trippers.go:580]     Audit-Id: 0dd16ca9-9963-4fa5-87e8-efa78837dd4c
	I0916 10:54:49.072115  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.072119  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.072130  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.072221  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.072556  150386 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.072575  150386 pod_ready.go:82] duration metric: took 4.758808ms for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
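Editor's note: after the node wait, the log moves to a per-pod phase, checking each system-critical pod (coredns above, etcd and kube-apiserver below) the same way. The readiness test on a Pod differs only in which condition is read: PodReady instead of NodeReady. A self-contained sketch of that check, with a hypothetical helper name of our choosing:

// Sketch only: a pod counts as "Ready" when its PodReady condition is True.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsReady mirrors the log's `has status "Ready":"True"` check for pods.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(podIsReady(pod)) // prints: true
}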
	I0916 10:54:49.072586  150386 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.072638  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:54:49.072645  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.072652  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.072655  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.074334  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.074349  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.074358  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.074363  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.074370  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.074376  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.074380  150386 round_trippers.go:580]     Audit-Id: b4481839-1437-4079-a7a3-671987eb810d
	I0916 10:54:49.074384  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.074527  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"382","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6435 chars]
	I0916 10:54:49.074921  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.074936  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.074942  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.074947  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.076515  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.076533  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.076545  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.076551  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.076600  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.076615  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.076620  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.076626  150386 round_trippers.go:580]     Audit-Id: b17f7e59-8743-4a30-ac57-f79a52e5f01e
	I0916 10:54:49.076745  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.077123  150386 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.077141  150386 pod_ready.go:82] duration metric: took 4.549084ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.077158  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.077235  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:54:49.077243  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.077252  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.077261  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.078953  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.078970  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.078976  150386 round_trippers.go:580]     Audit-Id: 155ff9fe-5eaf-4de7-82af-3c85987dcef5
	I0916 10:54:49.078980  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.078984  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.078988  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.078990  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.078993  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.079177  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"384","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8513 chars]
	I0916 10:54:49.079576  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.079610  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.079617  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.079621  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.081136  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.081154  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.081163  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.081169  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.081175  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.081182  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.081186  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.081189  150386 round_trippers.go:580]     Audit-Id: 1602fedf-bf13-4ecd-9692-278af862ff3f
	I0916 10:54:49.081302  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.081726  150386 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.081748  150386 pod_ready.go:82] duration metric: took 4.578295ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.081760  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.081824  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:54:49.081835  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.081845  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.081852  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.083444  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.083458  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.083466  150386 round_trippers.go:580]     Audit-Id: ebadba2b-bdfe-4fc2-a1fd-95dfef4b8dca
	I0916 10:54:49.083472  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.083476  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.083485  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.083492  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.083496  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.083638  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"380","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8088 chars]
	I0916 10:54:49.084042  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.084054  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.084061  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.084065  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.085772  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.085793  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.085802  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.085808  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.085812  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.085818  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.085826  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.085832  150386 round_trippers.go:580]     Audit-Id: f3fd8af6-6b47-4fd2-8488-d5ae87a3a9ef
	I0916 10:54:49.085967  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.086247  150386 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.086261  150386 pod_ready.go:82] duration metric: took 4.494011ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.086273  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.259738  150386 request.go:632] Waited for 173.387504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:54:49.259815  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:54:49.259823  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.259833  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.259846  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.263334  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:49.263361  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.263372  150386 round_trippers.go:580]     Audit-Id: bf9492a2-83f6-43bf-b39c-837cb3fc7da5
	I0916 10:54:49.263376  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.263382  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.263387  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.263392  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.263397  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.263524  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"348","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:54:49.459341  150386 request.go:632] Waited for 195.349446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.459428  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.459442  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.459450  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.459455  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.461786  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.461806  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.461812  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.461815  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.461818  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.461821  150386 round_trippers.go:580]     Audit-Id: 1df62ff4-e821-41ac-888b-d13b62fe90cb
	I0916 10:54:49.461825  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.461829  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.461979  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.462297  150386 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.462310  150386 pod_ready.go:82] duration metric: took 376.031746ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.462321  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.659605  150386 request.go:632] Waited for 197.202161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:54:49.659663  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:54:49.659670  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.659680  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.659692  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.662224  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.662250  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.662260  150386 round_trippers.go:580]     Audit-Id: 4275301e-b3c5-412d-80f5-fdfb3775bb15
	I0916 10:54:49.662266  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.662271  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.662276  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.662280  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.662285  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.662438  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qds2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac30bd54-b932-4f52-a53c-4edbc5eefc7c","resourceVersion":"475","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:54:49.859219  150386 request.go:632] Waited for 196.277089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:49.859292  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:49.859299  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.859309  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.859314  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.861645  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.861668  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.861676  150386 round_trippers.go:580]     Audit-Id: 26bb0cab-ad8d-466c-b3f9-d15f0036fc7b
	I0916 10:54:49.861682  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.861688  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.861694  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.861699  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.861703  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.861798  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"489","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5855 chars]
	I0916 10:54:49.862131  150386 pod_ready.go:93] pod "kube-proxy-qds2d" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.862148  150386 pod_ready.go:82] duration metric: took 399.820491ms for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.862157  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:50.059477  150386 request.go:632] Waited for 197.252131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:54:50.059552  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:54:50.059560  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:50.059571  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:50.059580  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:50.062187  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:50.062227  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:50.062238  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:50.062245  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:50.062248  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:50.062251  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:50.062254  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:50 GMT
	I0916 10:54:50.062259  150386 round_trippers.go:580]     Audit-Id: 2416b370-bac5-4800-9b0b-8766b1fc1ef1
	I0916 10:54:50.062374  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"377","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4970 chars]
	I0916 10:54:50.259053  150386 request.go:632] Waited for 196.284331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:50.259125  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:50.259131  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:50.259142  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:50.259148  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:50.261248  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:50.261270  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:50.261280  150386 round_trippers.go:580]     Audit-Id: defb4f0c-b73f-4075-9dc0-d352e539d7c6
	I0916 10:54:50.261289  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:50.261292  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:50.261296  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:50.261300  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:50.261306  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:50 GMT
	I0916 10:54:50.261428  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:50.261831  150386 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:50.261855  150386 pod_ready.go:82] duration metric: took 399.68992ms for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:50.261870  150386 pod_ready.go:39] duration metric: took 1.199818746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:54:50.261891  150386 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:54:50.261955  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:54:50.272993  150386 system_svc.go:56] duration metric: took 11.095119ms WaitForService to wait for kubelet
	I0916 10:54:50.273031  150386 kubeadm.go:582] duration metric: took 13.809973833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:50.273053  150386 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:54:50.459512  150386 request.go:632] Waited for 186.380571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:54:50.459582  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:54:50.459593  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:50.459604  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:50.459610  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:50.463070  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:50.463096  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:50.463106  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:50.463112  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:50.463116  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:50.463121  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:50.463124  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:50 GMT
	I0916 10:54:50.463127  150386 round_trippers.go:580]     Audit-Id: 072c69b0-e276-417b-bb13-fd249f20d557
	I0916 10:54:50.463389  150386 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"490"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12847 chars]
	I0916 10:54:50.463881  150386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:54:50.463901  150386 node_conditions.go:123] node cpu capacity is 8
	I0916 10:54:50.463911  150386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:54:50.463915  150386 node_conditions.go:123] node cpu capacity is 8
	I0916 10:54:50.463919  150386 node_conditions.go:105] duration metric: took 190.861345ms to run NodePressure ...
	I0916 10:54:50.463931  150386 start.go:241] waiting for startup goroutines ...
	I0916 10:54:50.463953  150386 start.go:255] writing updated cluster config ...
	I0916 10:54:50.464253  150386 ssh_runner.go:195] Run: rm -f paused
	I0916 10:54:50.471683  150386 out.go:177] * Done! kubectl is now configured to use "multinode-026168" cluster and "default" namespace by default
	E0916 10:54:50.472866  150386 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 10:54:21 multinode-026168 crio[1036]: time="2024-09-16 10:54:21.408315288Z" level=info msg="Created container dd488a7986689a3b741c4640a0507a0bb14054b96a7c905ed64792e2e8aabd77: kube-system/coredns-7c65d6cfc9-s82cx/coredns" id=6d7639d5-6284-471f-af47-76a746536a1c name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:54:21 multinode-026168 crio[1036]: time="2024-09-16 10:54:21.408836594Z" level=info msg="Starting container: dd488a7986689a3b741c4640a0507a0bb14054b96a7c905ed64792e2e8aabd77" id=44368356-a791-46fb-a41d-2d3aba74d249 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:54:21 multinode-026168 crio[1036]: time="2024-09-16 10:54:21.417157923Z" level=info msg="Started container" PID=2271 containerID=dd488a7986689a3b741c4640a0507a0bb14054b96a7c905ed64792e2e8aabd77 description=kube-system/coredns-7c65d6cfc9-s82cx/coredns id=44368356-a791-46fb-a41d-2d3aba74d249 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c78b727e5ddd75d14e74c37444e462ebaceacb4ec9574635898675863c49c63c
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.439874619Z" level=info msg="Running pod sandbox: default/busybox-7dff88458-qt9rx/POD" id=49a45ee6-4723-4eb8-a4a5-5408040f0b07 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.439957676Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.453709535Z" level=info msg="Got pod network &{Name:busybox-7dff88458-qt9rx Namespace:default ID:13510cd1f15810cc6e086d3a03dbd3cbfa9654ab55384185948baf7590fc58aa UID:d57d4baf-c7d6-4ab6-aa3b-fda87c54a2b3 NetNS:/var/run/netns/8450bee4-4a0f-46f6-aaa4-467251c3a5fa Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.453739955Z" level=info msg="Adding pod default_busybox-7dff88458-qt9rx to CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.463019830Z" level=info msg="Got pod network &{Name:busybox-7dff88458-qt9rx Namespace:default ID:13510cd1f15810cc6e086d3a03dbd3cbfa9654ab55384185948baf7590fc58aa UID:d57d4baf-c7d6-4ab6-aa3b-fda87c54a2b3 NetNS:/var/run/netns/8450bee4-4a0f-46f6-aaa4-467251c3a5fa Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.463183285Z" level=info msg="Checking pod default_busybox-7dff88458-qt9rx for CNI network kindnet (type=ptp)"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.466461833Z" level=info msg="Ran pod sandbox 13510cd1f15810cc6e086d3a03dbd3cbfa9654ab55384185948baf7590fc58aa with infra container: default/busybox-7dff88458-qt9rx/POD" id=49a45ee6-4723-4eb8-a4a5-5408040f0b07 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.467790445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=dd295c8b-dd40-479c-8547-e47a36230620 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.468038208Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=dd295c8b-dd40-479c-8547-e47a36230620 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.468952675Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7a209516-0b9d-4b6a-8357-bce55a486fff name=/runtime.v1.ImageService/PullImage
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.481368131Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:54:52 multinode-026168 crio[1036]: time="2024-09-16 10:54:52.332298901Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.205037193Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=7a209516-0b9d-4b6a-8357-bce55a486fff name=/runtime.v1.ImageService/PullImage
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.205834803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c834c4c1-abc5-47c2-b7a4-2fc9f75dcf48 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.206435687Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c834c4c1-abc5-47c2-b7a4-2fc9f75dcf48 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.207079460Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=cb50c416-41d5-4f4b-a655-678f57d80e69 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.207665053Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cb50c416-41d5-4f4b-a655-678f57d80e69 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.208315074Z" level=info msg="Creating container: default/busybox-7dff88458-qt9rx/busybox" id=dc98a5fe-66e1-4e11-9cce-ca804e1c1a75 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.208404944Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.251872423Z" level=info msg="Created container 83607811f9eb9ab48e0ee8d2c2a26d4614a56aa450821115b45ddf3d89706b72: default/busybox-7dff88458-qt9rx/busybox" id=dc98a5fe-66e1-4e11-9cce-ca804e1c1a75 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.252570812Z" level=info msg="Starting container: 83607811f9eb9ab48e0ee8d2c2a26d4614a56aa450821115b45ddf3d89706b72" id=a4a11726-fef5-4a87-b15b-84033f4e1b59 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.258426207Z" level=info msg="Started container" PID=2437 containerID=83607811f9eb9ab48e0ee8d2c2a26d4614a56aa450821115b45ddf3d89706b72 description=default/busybox-7dff88458-qt9rx/busybox id=a4a11726-fef5-4a87-b15b-84033f4e1b59 name=/runtime.v1.RuntimeService/StartContainer sandboxID=13510cd1f15810cc6e086d3a03dbd3cbfa9654ab55384185948baf7590fc58aa
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	83607811f9eb9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   29 seconds ago       Running             busybox                   0                   13510cd1f1581       busybox-7dff88458-qt9rx
	dd488a7986689       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   0                   c78b727e5ddd7       coredns-7c65d6cfc9-s82cx
	8913755836cdf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   d3faa1e799926       storage-provisioner
	94f816a173a35       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                0                   9ceb1b5d5a981       kube-proxy-6p6vt
	031615b88b45c       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               0                   bf2205a75f62c       kindnet-zv2p5
	8a997c9857a33       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   0                   f3e447b209d6f       kube-controller-manager-multinode-026168
	fd0447db4a560       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            0                   4a4735b8eefdf       kube-apiserver-multinode-026168
	974f8e8c18191       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            0                   28d0d26f8e186       kube-scheduler-multinode-026168
	62d269db79164       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      0                   123e0f4195c8e       etcd-multinode-026168
	
	
	==> coredns [dd488a7986689a3b741c4640a0507a0bb14054b96a7c905ed64792e2e8aabd77] <==
	[INFO] 10.244.0.3:51187 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094239s
	[INFO] 10.244.1.2:45181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146935s
	[INFO] 10.244.1.2:59156 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002005499s
	[INFO] 10.244.1.2:39395 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101088s
	[INFO] 10.244.1.2:42528 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093197s
	[INFO] 10.244.1.2:33187 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001513712s
	[INFO] 10.244.1.2:33143 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127202s
	[INFO] 10.244.1.2:47467 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006244s
	[INFO] 10.244.1.2:56932 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085407s
	[INFO] 10.244.0.3:51617 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140403s
	[INFO] 10.244.0.3:48759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081554s
	[INFO] 10.244.0.3:37584 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090321s
	[INFO] 10.244.0.3:59186 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065092s
	[INFO] 10.244.1.2:36167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135451s
	[INFO] 10.244.1.2:59973 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099974s
	[INFO] 10.244.1.2:58529 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060588s
	[INFO] 10.244.1.2:53665 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006045s
	[INFO] 10.244.0.3:45471 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119597s
	[INFO] 10.244.0.3:51073 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168151s
	[INFO] 10.244.0.3:37620 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118832s
	[INFO] 10.244.0.3:38968 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098574s
	[INFO] 10.244.1.2:41991 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140828s
	[INFO] 10.244.1.2:56798 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116768s
	[INFO] 10.244.1.2:55463 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080791s
	[INFO] 10.244.1.2:37704 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058286s
	
	
	==> describe nodes <==
	Name:               multinode-026168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_53_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:53:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:55:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:55:05 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:55:05 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:55:05 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:55:05 +0000   Mon, 16 Sep 2024 10:54:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-026168
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 abcf2b5c41114d64bb158d3abc1bc1e7
	  System UUID:                8db2fd04-b5e4-4ec7-8d8e-d94280ac94a3
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qt9rx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 coredns-7c65d6cfc9-s82cx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-multinode-026168                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-zv2p5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-multinode-026168             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-multinode-026168    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-6p6vt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-multinode-026168             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 103s  kube-proxy       
	  Normal   Starting                 109s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 109s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  109s  kubelet          Node multinode-026168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    109s  kubelet          Node multinode-026168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s  kubelet          Node multinode-026168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           105s  node-controller  Node multinode-026168 event: Registered Node multinode-026168 in Controller
	  Normal   NodeReady                63s   kubelet          Node multinode-026168 status is now: NodeReady
	
	
	Name:               multinode-026168-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_54_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:54:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:55:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:55:06 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:55:06 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:55:06 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:55:06 +0000   Mon, 16 Sep 2024 10:54:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-026168-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 7732a396f8244d84817f5f8cac803842
	  System UUID:                50f4fbf1-c6a3-4700-a79b-bb8841197877
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z8csk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kindnet-mckv5              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      47s
	  kube-system                 kube-proxy-qds2d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 45s                kube-proxy       
	  Normal  NodeHasSufficientMemory  47s (x2 over 48s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x2 over 48s)  kubelet          Node multinode-026168-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x2 over 48s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node multinode-026168-m02 event: Registered Node multinode-026168-m02 in Controller
	  Normal  NodeReady                35s                kubelet          Node multinode-026168-m02 status is now: NodeReady
	
	
	Name:               multinode-026168-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_55_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:55:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:55:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:55:20 +0000   Mon, 16 Sep 2024 10:55:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:55:20 +0000   Mon, 16 Sep 2024 10:55:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:55:20 +0000   Mon, 16 Sep 2024 10:55:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:55:20 +0000   Mon, 16 Sep 2024 10:55:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.4
	  Hostname:    multinode-026168-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdc0e75be17e4d3f9f9899b448a95dc1
	  System UUID:                df965121-0c57-4bf4-8c99-f55a28f729db
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2jtzj       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17s
	  kube-system                 kube-proxy-g86bs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x2 over 17s)  kubelet          Node multinode-026168-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x2 over 17s)  kubelet          Node multinode-026168-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x2 over 17s)  kubelet          Node multinode-026168-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                node-controller  Node multinode-026168-m03 event: Registered Node multinode-026168-m03 in Controller
	  Normal  NodeReady                3s                 kubelet          Node multinode-026168-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.095980] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004016] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +1.915832] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +4.031681] net_ratelimit: 5 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000002] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.255941] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000001] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004022] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +7.931402] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000002] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004224] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.251741] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000008] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	
	
	==> etcd [62d269db791644dfcf7b38f0bcb3db1a486dd899cb5b8b1a7653839af3df554b] <==
	{"level":"info","ts":"2024-09-16T10:53:29.693859Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:53:29.693990Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:53:29.694028Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:53:29.634144Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:53:29.694641Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:53:29.821369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:53:29.821424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:53:29.821469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-09-16T10:53:29.821484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:53:29.821508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:53:29.821523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:53:29.821531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:53:29.822566Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:53:29.823322Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:53:29.823322Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-026168 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:53:29.823391Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:53:29.823637Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:53:29.823662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:53:29.823745Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:53:29.823835Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:53:29.823877Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:53:29.824554Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:53:29.824646Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:53:29.825426Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:53:29.825454Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> kernel <==
	 10:55:24 up 37 min,  0 users,  load average: 0.79, 1.32, 1.00
	Linux multinode-026168 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [031615b88b45c13559d669c660a76b43765997b0548fcfa19fa2eea1c71beffc] <==
	I0916 10:54:40.594402       1 main.go:299] handling current node
	I0916 10:54:40.594418       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:54:40.594423       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:54:40.594629       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.67.3 Flags: [] Table: 0} 
	I0916 10:54:50.594519       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:54:50.594569       1 main.go:299] handling current node
	I0916 10:54:50.594584       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:54:50.594589       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:00.597422       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:55:00.597510       1 main.go:299] handling current node
	I0916 10:55:00.597526       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:55:00.597531       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:10.594561       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:55:10.594641       1 main.go:299] handling current node
	I0916 10:55:10.594660       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:55:10.594668       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:10.594825       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:55:10.594846       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:55:10.594904       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.67.4 Flags: [] Table: 0} 
	I0916 10:55:20.594146       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:55:20.594181       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:20.594304       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:55:20.594312       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:55:20.594374       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:55:20.594386       1 main.go:299] handling current node
	
	
	==> kube-apiserver [fd0447db4a560a60ebcfda53d853a3e402c5897ca07bff9ef1397e4a880e4a17] <==
	I0916 10:53:32.762598       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:53:32.762617       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:53:33.235614       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:53:33.282392       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:53:33.363208       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:53:33.369405       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0916 10:53:33.370718       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:53:33.375137       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:53:33.817118       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:53:34.460123       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:53:34.469876       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:53:34.477929       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:53:38.819211       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:53:39.469789       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 10:54:55.336047       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45634: use of closed network connection
	E0916 10:54:55.492122       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45658: use of closed network connection
	E0916 10:54:55.656364       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45670: use of closed network connection
	E0916 10:54:55.808526       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45686: use of closed network connection
	E0916 10:54:55.959936       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45708: use of closed network connection
	E0916 10:54:56.106197       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45722: use of closed network connection
	E0916 10:54:56.362300       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45748: use of closed network connection
	E0916 10:54:56.509506       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45772: use of closed network connection
	E0916 10:54:56.653981       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45792: use of closed network connection
	E0916 10:54:56.801852       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45818: use of closed network connection
	
	
	==> kube-controller-manager [8a997c9857a33b254e0f727760c626327173dac57074563809c3087a43fee71e] <==
	I0916 10:54:51.193948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.348493ms"
	I0916 10:54:51.206532       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.529479ms"
	I0916 10:54:51.206693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.307µs"
	I0916 10:54:53.585018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m02"
	I0916 10:54:54.577751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.687799ms"
	I0916 10:54:54.577867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.823µs"
	I0916 10:54:54.935158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.559253ms"
	I0916 10:54:54.935240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.35µs"
	I0916 10:55:05.944960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168"
	I0916 10:55:06.799587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m02"
	I0916 10:55:06.906228       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-026168-m03\" does not exist"
	I0916 10:55:06.906316       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-026168-m02"
	I0916 10:55:06.911934       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-026168-m03" podCIDRs=["10.244.2.0/24"]
	I0916 10:55:06.911967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:06.912035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:06.918648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:06.950803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:07.172627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:08.587705       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-026168-m03"
	I0916 10:55:08.626743       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:16.977212       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:20.053468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-026168-m02"
	I0916 10:55:20.053524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:20.062234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:23.600713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	
	
	==> kube-proxy [94f816a173a351d394edbe3db69798d9d3bc38225a8c8fda39ab554294fee17a] <==
	I0916 10:53:39.937150       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:53:40.100667       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:53:40.100747       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:53:40.214258       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:53:40.214346       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:53:40.216756       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:53:40.217451       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:53:40.217487       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:53:40.218742       1 config.go:199] "Starting service config controller"
	I0916 10:53:40.218780       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:53:40.218818       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:53:40.218829       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:53:40.219300       1 config.go:328] "Starting node config controller"
	I0916 10:53:40.219318       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:53:40.319906       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:53:40.319921       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:53:40.319994       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [974f8e8c181912c331a9a90b937ad165217c9646d4dd4d80b604897509dbf716] <==
	W0916 10:53:31.918636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:53:31.918654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:31.918641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 10:53:31.918729       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:53:31.918750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0916 10:53:31.918723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.771084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:53:32.771123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.817498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:53:32.817541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.858871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:53:32.858918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.860931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:53:32.860968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.908547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:53:32.908621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.912926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:53:32.912974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.967806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:53:32.967853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:33.036648       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:53:33.036691       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:53:33.056529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:53:33.056631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:53:36.015633       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:54:04 multinode-026168 kubelet[1653]: E0916 10:54:04.414016    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484044413794216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:04 multinode-026168 kubelet[1653]: E0916 10:54:04.414059    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484044413794216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:14 multinode-026168 kubelet[1653]: E0916 10:54:14.415200    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484054415024302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:14 multinode-026168 kubelet[1653]: E0916 10:54:14.415240    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484054415024302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:20 multinode-026168 kubelet[1653]: I0916 10:54:20.990983    1653 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.191174    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7-tmp\") pod \"storage-provisioner\" (UID: \"ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.191236    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkn6c\" (UniqueName: \"kubernetes.io/projected/85130138-c50d-47a8-8bbe-de91bb9a0472-kube-api-access-tkn6c\") pod \"coredns-7c65d6cfc9-s82cx\" (UID: \"85130138-c50d-47a8-8bbe-de91bb9a0472\") " pod="kube-system/coredns-7c65d6cfc9-s82cx"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.191267    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvkl9\" (UniqueName: \"kubernetes.io/projected/ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7-kube-api-access-xvkl9\") pod \"storage-provisioner\" (UID: \"ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.191292    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85130138-c50d-47a8-8bbe-de91bb9a0472-config-volume\") pod \"coredns-7c65d6cfc9-s82cx\" (UID: \"85130138-c50d-47a8-8bbe-de91bb9a0472\") " pod="kube-system/coredns-7c65d6cfc9-s82cx"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.505008    1653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-s82cx" podStartSLOduration=42.504986406 podStartE2EDuration="42.504986406s" podCreationTimestamp="2024-09-16 10:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:54:21.50471358 +0000 UTC m=+47.256797721" watchObservedRunningTime="2024-09-16 10:54:21.504986406 +0000 UTC m=+47.257070546"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.513711    1653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.513685569 podStartE2EDuration="41.513685569s" podCreationTimestamp="2024-09-16 10:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:54:21.513315134 +0000 UTC m=+47.265399272" watchObservedRunningTime="2024-09-16 10:54:21.513685569 +0000 UTC m=+47.265769736"
	Sep 16 10:54:24 multinode-026168 kubelet[1653]: E0916 10:54:24.416435    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484064416225651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:24 multinode-026168 kubelet[1653]: E0916 10:54:24.416481    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484064416225651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:34 multinode-026168 kubelet[1653]: E0916 10:54:34.417526    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484074417280895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:34 multinode-026168 kubelet[1653]: E0916 10:54:34.417568    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484074417280895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:44 multinode-026168 kubelet[1653]: E0916 10:54:44.418580    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484084418371436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:44 multinode-026168 kubelet[1653]: E0916 10:54:44.418615    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484084418371436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:51 multinode-026168 kubelet[1653]: I0916 10:54:51.294084    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5cs\" (UniqueName: \"kubernetes.io/projected/d57d4baf-c7d6-4ab6-aa3b-fda87c54a2b3-kube-api-access-ts5cs\") pod \"busybox-7dff88458-qt9rx\" (UID: \"d57d4baf-c7d6-4ab6-aa3b-fda87c54a2b3\") " pod="default/busybox-7dff88458-qt9rx"
	Sep 16 10:54:54 multinode-026168 kubelet[1653]: E0916 10:54:54.420224    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484094420002220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:54 multinode-026168 kubelet[1653]: E0916 10:54:54.420272    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484094420002220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:54 multinode-026168 kubelet[1653]: I0916 10:54:54.571314    1653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-qt9rx" podStartSLOduration=0.832867296 podStartE2EDuration="3.571296442s" podCreationTimestamp="2024-09-16 10:54:51 +0000 UTC" firstStartedPulling="2024-09-16 10:54:51.46822216 +0000 UTC m=+77.220306291" lastFinishedPulling="2024-09-16 10:54:54.2066513 +0000 UTC m=+79.958735437" observedRunningTime="2024-09-16 10:54:54.571036616 +0000 UTC m=+80.323120755" watchObservedRunningTime="2024-09-16 10:54:54.571296442 +0000 UTC m=+80.323380581"
	Sep 16 10:55:04 multinode-026168 kubelet[1653]: E0916 10:55:04.421472    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484104421256152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:04 multinode-026168 kubelet[1653]: E0916 10:55:04.421518    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484104421256152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:14 multinode-026168 kubelet[1653]: E0916 10:55:14.423046    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484114422875682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:14 multinode-026168 kubelet[1653]: E0916 10:55:14.423086    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484114422875682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
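
The kindnet log in the dump above shows the CNI daemon programming one host route per remote node, e.g. {Dst: 10.244.2.0/24, Gw: 192.168.67.4}, so pod traffic bound for multinode-026168-m03 is sent straight to that node's IP. A minimal sketch of that single step, assuming the github.com/vishvananda/netlink package that kindnet itself is built on; the CIDR and gateway are taken from the log, and root privileges are required:

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Remote node's pod CIDR and node IP, as reported by kindnet above.
	_, dst, err := net.ParseCIDR("10.244.2.0/24")
	if err != nil {
		log.Fatal(err)
	}
	gw := net.ParseIP("192.168.67.4")

	// Equivalent of `ip route add 10.244.2.0/24 via 192.168.67.4`:
	// traffic for the remote node's pods is forwarded to that node directly.
	if err := netlink.RouteAdd(&netlink.Route{Dst: dst, Gw: gw}); err != nil {
		log.Fatalf("adding route: %v", err)
	}
}
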
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-026168 -n multinode-026168
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (480.19µs)
helpers_test.go:263: kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/MultiNodeLabels (2.25s)
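
Every kubectl invocation in this run fails the same way: fork/exec /usr/local/bin/kubectl: exec format error. The kernel returns that error (ENOEXEC) when the file is not a binary it can execute on this host, most commonly a wrong-architecture or truncated download. One way to confirm is to read the ELF header directly; a sketch under the assumption that the host is linux/amd64, as the node info above reports (the path is the one from the failure message):

package main

import (
	"debug/elf"
	"fmt"
	"log"
)

func main() {
	const path = "/usr/local/bin/kubectl" // binary from the failure message

	f, err := elf.Open(path)
	if err != nil {
		// Not valid ELF at all -- e.g. a darwin build or a corrupt download.
		log.Fatalf("%s is not a readable ELF binary: %v", path, err)
	}
	defer f.Close()

	fmt.Printf("machine=%v class=%v\n", f.Machine, f.Class)
	if f.Machine != elf.EM_X86_64 {
		fmt.Println("architecture mismatch: exec on this amd64 host fails with ENOEXEC")
	}
}

Running `file /usr/local/bin/kubectl` from the shell would surface the same information.
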

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (11.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-026168 node start m03 -v=7 --alsologtostderr: (8.429736641s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
multinode_test.go:306: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (463.69µs)
multinode_test.go:308: failed to kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-026168
helpers_test.go:235: (dbg) docker inspect multinode-026168:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74",
	        "Created": "2024-09-16T10:53:21.752929602Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 151054,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:53:21.869714559Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/hostname",
	        "HostsPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/hosts",
	        "LogPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74-json.log",
	        "Name": "/multinode-026168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-026168:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-026168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-026168",
	                "Source": "/var/lib/docker/volumes/multinode-026168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-026168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-026168",
	                "name.minikube.sigs.k8s.io": "multinode-026168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7af9c28e5e64078796e260ddd459f762670a6f4dbc2efb9ece79d12ebff981c",
	            "SandboxKey": "/var/run/docker/netns/b7af9c28e5e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-026168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a5a173559814a989877e5b7826f3cf7f4df5f065fe1cdcc6350cf486bc64e678",
	                    "EndpointID": "4f9d887b0da816276a4cc9cb835cc6812b15d59e3eb718896f4150bf9e5d1a47",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-026168",
	                        "23ba806c0524"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
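The inspect JSON above is the same data the test helpers later read back one field at a time with Go templates. As a hedged sketch (a hypothetical helper, not minikube's own code), the mapped SSH port can be recovered like this, using the exact template expression that appears later in this log:

    // portprobe.go — sketch: ask Docker for the host port mapped to the
    // container's 22/tcp, as listed under NetworkSettings.Ports above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("docker inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("multinode-026168")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 32903 in the run above
    }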
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-026168 -n multinode-026168
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-026168 logs -n 25: (1.250123706s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-026168 cp multinode-026168:/home/docker/cp-test.txt                           | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03:/home/docker/cp-test_multinode-026168_multinode-026168-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168-m03 sudo cat                                   | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168_multinode-026168-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp testdata/cp-test.txt                                                | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2288589271/001/cp-test_multinode-026168-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168:/home/docker/cp-test_multinode-026168-m02_multinode-026168.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168 sudo cat                                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m02_multinode-026168.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03:/home/docker/cp-test_multinode-026168-m02_multinode-026168-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168-m03 sudo cat                                   | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m02_multinode-026168-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp testdata/cp-test.txt                                                | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2288589271/001/cp-test_multinode-026168-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168:/home/docker/cp-test_multinode-026168-m03_multinode-026168.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168 sudo cat                                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m03_multinode-026168.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02:/home/docker/cp-test_multinode-026168-m03_multinode-026168-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168-m02 sudo cat                                   | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m03_multinode-026168-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-026168 node stop m03                                                          | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	| node    | multinode-026168 node start                                                             | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
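	Every cp/ssh pair in the audit table above is one round-trip check: copy a file onto a node, then cat it back over SSH and compare. A minimal Go sketch of that pattern (hypothetical code, reusing the binary path and profile from this run, not the test's own implementation):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // roundTrip pushes a local file to a node with `minikube cp`, reads it
    // back with `minikube ssh -n <node> sudo cat`, and compares contents.
    func roundTrip(profile, node, local, remote string) error {
    	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
    		"cp", local, node+":"+remote).CombinedOutput(); err != nil {
    		return fmt.Errorf("cp failed: %v: %s", err, out)
    	}
    	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
    		"ssh", "-n", node, "sudo", "cat", remote).Output()
    	if err != nil {
    		return fmt.Errorf("ssh cat failed: %v", err)
    	}
    	want, err := os.ReadFile(local)
    	if err != nil {
    		return err
    	}
    	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
    		return fmt.Errorf("contents differ after round-trip")
    	}
    	return nil
    }

    func main() {
    	err := roundTrip("multinode-026168", "multinode-026168-m02",
    		"testdata/cp-test.txt", "/home/docker/cp-test.txt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }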
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:53:16
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:53:16.240635  150386 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:53:16.240738  150386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:16.240743  150386 out.go:358] Setting ErrFile to fd 2...
	I0916 10:53:16.240747  150386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:16.240929  150386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:53:16.241499  150386 out.go:352] Setting JSON to false
	I0916 10:53:16.242411  150386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2136,"bootTime":1726481860,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:53:16.242505  150386 start.go:139] virtualization: kvm guest
	I0916 10:53:16.245004  150386 out.go:177] * [multinode-026168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:53:16.246642  150386 notify.go:220] Checking for updates...
	I0916 10:53:16.246654  150386 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:53:16.248057  150386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:53:16.249745  150386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:16.251336  150386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:53:16.252776  150386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:53:16.254106  150386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:53:16.255610  150386 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:53:16.277663  150386 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:53:16.277759  150386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:53:16.331858  150386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:53:16.322223407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:53:16.331964  150386 docker.go:318] overlay module found
	I0916 10:53:16.334087  150386 out.go:177] * Using the docker driver based on user configuration
	I0916 10:53:16.335429  150386 start.go:297] selected driver: docker
	I0916 10:53:16.335446  150386 start.go:901] validating driver "docker" against <nil>
	I0916 10:53:16.335457  150386 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:53:16.336234  150386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:53:16.383688  150386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:53:16.373943804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:53:16.383844  150386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:53:16.384051  150386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:53:16.385893  150386 out.go:177] * Using Docker driver with root privileges
	I0916 10:53:16.387506  150386 cni.go:84] Creating CNI manager for ""
	I0916 10:53:16.387550  150386 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:53:16.387559  150386 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:53:16.387651  150386 start.go:340] cluster config:
	{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:53:16.389477  150386 out.go:177] * Starting "multinode-026168" primary control-plane node in "multinode-026168" cluster
	I0916 10:53:16.391199  150386 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:53:16.393047  150386 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:53:16.394534  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:53:16.394579  150386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:53:16.394590  150386 cache.go:56] Caching tarball of preloaded images
	I0916 10:53:16.394653  150386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:53:16.394679  150386 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:53:16.394687  150386 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:53:16.395028  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:53:16.395053  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json: {Name:mk91cb70ae479e3389c4ae23dab5870b80a4399e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 10:53:16.415170  150386 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:53:16.415191  150386 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:53:16.415291  150386 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:53:16.415312  150386 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:53:16.415318  150386 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:53:16.415328  150386 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:53:16.415335  150386 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:53:16.416531  150386 image.go:273] response: 
	I0916 10:53:16.473943  150386 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:53:16.474010  150386 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:53:16.474053  150386 start.go:360] acquireMachinesLock for multinode-026168: {Name:mk1016c8f1a43c2d6030796baf01aa33f86316e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:53:16.474190  150386 start.go:364] duration metric: took 109.669µs to acquireMachinesLock for "multinode-026168"
	I0916 10:53:16.474220  150386 start.go:93] Provisioning new machine with config: &{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:53:16.474334  150386 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:53:16.476233  150386 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:53:16.476541  150386 start.go:159] libmachine.API.Create for "multinode-026168" (driver="docker")
	I0916 10:53:16.476574  150386 client.go:168] LocalClient.Create starting
	I0916 10:53:16.476652  150386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:53:16.476695  150386 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:16.476712  150386 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:16.476764  150386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:53:16.476799  150386 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:16.476815  150386 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:16.477238  150386 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:53:16.494854  150386 cli_runner.go:211] docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:53:16.494927  150386 network_create.go:284] running [docker network inspect multinode-026168] to gather additional debugging logs...
	I0916 10:53:16.494974  150386 cli_runner.go:164] Run: docker network inspect multinode-026168
	W0916 10:53:16.515079  150386 cli_runner.go:211] docker network inspect multinode-026168 returned with exit code 1
	I0916 10:53:16.515125  150386 network_create.go:287] error running [docker network inspect multinode-026168]: docker network inspect multinode-026168: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-026168 not found
	I0916 10:53:16.515144  150386 network_create.go:289] output of [docker network inspect multinode-026168]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-026168 not found
	
	** /stderr **
	I0916 10:53:16.515299  150386 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:53:16.535537  150386 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 10:53:16.535947  150386 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 10:53:16.536373  150386 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018650a0}
	I0916 10:53:16.536394  150386 network_create.go:124] attempt to create docker network multinode-026168 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0916 10:53:16.536435  150386 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-026168 multinode-026168
	I0916 10:53:16.601989  150386 network_create.go:108] docker network multinode-026168 192.168.67.0/24 created
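	The two "skipping subnet" lines above show how the free /24 was picked: minikube walks candidate private subnets and takes the first with no owning interface. A simplified, hypothetical sketch of that walk (the real logic also honors reservations and checks host interfaces):

    package main

    import "fmt"

    // firstFreeSubnet steps through minikube-style candidates starting at
    // 192.168.49.0/24, advancing the third octet by 9 per attempt, and
    // returns the first CIDR not already claimed on this host.
    func firstFreeSubnet(taken map[string]bool) string {
    	for third := 49; third <= 247; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // br-1162a04f8fb0 in the log
    		"192.168.58.0/24": true, // br-38a96cee1ddf in the log
    	}
    	fmt.Println(firstFreeSubnet(taken)) // 192.168.67.0/24, as chosen above
    }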
	I0916 10:53:16.602030  150386 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-026168" container
	I0916 10:53:16.602084  150386 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:53:16.619330  150386 cli_runner.go:164] Run: docker volume create multinode-026168 --label name.minikube.sigs.k8s.io=multinode-026168 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:53:16.637521  150386 oci.go:103] Successfully created a docker volume multinode-026168
	I0916 10:53:16.637606  150386 cli_runner.go:164] Run: docker run --rm --name multinode-026168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-026168 --entrypoint /usr/bin/test -v multinode-026168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:53:17.150042  150386 oci.go:107] Successfully prepared a docker volume multinode-026168
	I0916 10:53:17.150090  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:53:17.150115  150386 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:53:17.150171  150386 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-026168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:53:21.687566  150386 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-026168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.537347589s)
	I0916 10:53:21.687602  150386 kic.go:203] duration metric: took 4.537484242s to extract preloaded images to volume ...
	W0916 10:53:21.687727  150386 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:53:21.687818  150386 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:53:21.736769  150386 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-026168 --name multinode-026168 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-026168 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-026168 --network multinode-026168 --ip 192.168.67.2 --volume multinode-026168:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:53:22.041826  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Running}}
	I0916 10:53:22.060023  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:22.080328  150386 cli_runner.go:164] Run: docker exec multinode-026168 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:53:22.124480  150386 oci.go:144] the created container "multinode-026168" has a running status.
	I0916 10:53:22.124520  150386 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa...
	I0916 10:53:22.429223  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:53:22.429266  150386 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:53:22.452062  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:22.469125  150386 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:53:22.469147  150386 kic_runner.go:114] Args: [docker exec --privileged multinode-026168 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:53:22.511759  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:22.531129  150386 machine.go:93] provisionDockerMachine start ...
	I0916 10:53:22.531206  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:22.551545  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:53:22.551837  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I0916 10:53:22.551854  150386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:53:22.692713  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168
	
	I0916 10:53:22.692742  150386 ubuntu.go:169] provisioning hostname "multinode-026168"
	I0916 10:53:22.692805  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:22.712078  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:53:22.712291  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I0916 10:53:22.712311  150386 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168 && echo "multinode-026168" | sudo tee /etc/hostname
	I0916 10:53:22.856873  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168
	
	I0916 10:53:22.856942  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:22.873834  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:53:22.874011  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I0916 10:53:22.874030  150386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168' | sudo tee -a /etc/hosts; 
				fi
			fi
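	The shell snippet above makes the node's hostname locally resolvable without clobbering other entries. A hedged Go equivalent (hypothetical, for illustration only; minikube runs the shell form over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // ensureHostsEntry mirrors the grep/sed logic: do nothing if the hostname
    // already resolves, otherwise rewrite the 127.0.1.1 line or append one.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
    		return nil // same as the `grep -xq '.*\s<host>'` guard
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.Match(data) {
    		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
    	} else {
    		data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
    	}
    	return os.WriteFile(path, data, 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "multinode-026168"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }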
	I0916 10:53:23.005826  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:53:23.005858  150386 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:53:23.005903  150386 ubuntu.go:177] setting up certificates
	I0916 10:53:23.005917  150386 provision.go:84] configureAuth start
	I0916 10:53:23.005973  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:53:23.022869  150386 provision.go:143] copyHostCerts
	I0916 10:53:23.022905  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:53:23.022933  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:53:23.022940  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:53:23.023003  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:53:23.023075  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:53:23.023095  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:53:23.023103  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:53:23.023128  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:53:23.023175  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:53:23.023196  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:53:23.023202  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:53:23.023222  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:53:23.023270  150386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-026168]
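	The server cert is issued from the local CA with SANs covering every name the apiserver may be dialed by. A minimal Go sketch of that issuance (hypothetical, standard library only; minikube's own helper differs in detail), reusing the SAN set and the 26280h expiry from the config above:

    // Package certsketch: hedged illustration of issuing a server cert.
    package certsketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // NewServerCert signs a fresh 2048-bit key with the CA and returns the
    // DER-encoded certificate plus the private key.
    func NewServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-026168"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-026168"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }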
	I0916 10:53:23.137406  150386 provision.go:177] copyRemoteCerts
	I0916 10:53:23.137473  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:53:23.137511  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.159463  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:23.258647  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:53:23.258716  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:53:23.281767  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:53:23.281827  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 10:53:23.305959  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:53:23.306027  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:53:23.328819  150386 provision.go:87] duration metric: took 322.885907ms to configureAuth
	I0916 10:53:23.328850  150386 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:53:23.329034  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:53:23.329174  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.346526  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:53:23.346889  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I0916 10:53:23.346919  150386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:53:23.566448  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:53:23.566472  150386 machine.go:96] duration metric: took 1.035323474s to provisionDockerMachine
	I0916 10:53:23.566482  150386 client.go:171] duration metric: took 7.089900982s to LocalClient.Create
	I0916 10:53:23.566496  150386 start.go:167] duration metric: took 7.089959092s to libmachine.API.Create "multinode-026168"
	I0916 10:53:23.566503  150386 start.go:293] postStartSetup for "multinode-026168" (driver="docker")
	I0916 10:53:23.566511  150386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:53:23.566575  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:53:23.566612  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.583611  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:23.679163  150386 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:53:23.682571  150386 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:53:23.682594  150386 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:53:23.682600  150386 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:53:23.682606  150386 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:53:23.682613  150386 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:53:23.682617  150386 command_runner.go:130] > ID=ubuntu
	I0916 10:53:23.682620  150386 command_runner.go:130] > ID_LIKE=debian
	I0916 10:53:23.682625  150386 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:53:23.682630  150386 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:53:23.682637  150386 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:53:23.682644  150386 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:53:23.682651  150386 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:53:23.682706  150386 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:53:23.682730  150386 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:53:23.682738  150386 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:53:23.682747  150386 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:53:23.682759  150386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:53:23.682817  150386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:53:23.682898  150386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:53:23.682912  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:53:23.683000  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:53:23.691650  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:53:23.713983  150386 start.go:296] duration metric: took 147.465039ms for postStartSetup
	I0916 10:53:23.714319  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:53:23.731359  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:53:23.731624  150386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:53:23.731662  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.748432  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:23.842021  150386 command_runner.go:130] > 30%
	I0916 10:53:23.842224  150386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:53:23.846641  150386 command_runner.go:130] > 205G
	I0916 10:53:23.846906  150386 start.go:128] duration metric: took 7.372555552s to createHost
	I0916 10:53:23.846930  150386 start.go:83] releasing machines lock for "multinode-026168", held for 7.372726341s
	I0916 10:53:23.847004  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:53:23.864775  150386 ssh_runner.go:195] Run: cat /version.json
	I0916 10:53:23.864823  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.864873  150386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:53:23.864929  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:23.883138  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:23.883396  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:24.044843  150386 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:53:24.047408  150386 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:53:24.047552  150386 ssh_runner.go:195] Run: systemctl --version
	I0916 10:53:24.051947  150386 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:53:24.051990  150386 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:53:24.052058  150386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:53:24.190794  150386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:53:24.194808  150386 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:53:24.194840  150386 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:53:24.194848  150386 command_runner.go:130] > Device: 37h/55d	Inode: 535096      Links: 1
	I0916 10:53:24.194866  150386 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:53:24.194875  150386 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:53:24.194884  150386 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:53:24.194891  150386 command_runner.go:130] > Change: 2024-09-16 10:23:14.009756274 +0000
	I0916 10:53:24.194896  150386 command_runner.go:130] >  Birth: 2024-09-16 10:23:14.009756274 +0000
	I0916 10:53:24.195105  150386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:53:24.213521  150386 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:53:24.213593  150386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:53:24.240626  150386 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0916 10:53:24.240701  150386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
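(Editor's note: the two find/mv runs above neutralize conflicting CNI configs by renaming rather than deleting them, so the step stays reversible. A minimal sketch of the same disable-by-rename idiom, with the paths and .mk_disabled suffix taken from the log:)

    # Disable any bridge/podman CNI configs in place by renaming them.
    # The .mk_disabled suffix keeps the originals recoverable.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;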
	I0916 10:53:24.240708  150386 start.go:495] detecting cgroup driver to use...
	I0916 10:53:24.240743  150386 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:53:24.240796  150386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:53:24.254870  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:53:24.265498  150386 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:53:24.265557  150386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:53:24.278044  150386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:53:24.291857  150386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:53:24.369500  150386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:53:24.447658  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 10:53:24.447701  150386 docker.go:233] disabling docker service ...
	I0916 10:53:24.447749  150386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:53:24.465271  150386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:53:24.475865  150386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:53:24.555564  150386 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0916 10:53:24.555651  150386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:53:24.636251  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 10:53:24.636331  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
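(Editor's note: the sequence above boils down to stopping and masking the competing runtimes so only CRI-O can serve the CRI socket. A sketch of the same pattern, using the unit names from the log; masking points the units at /dev/null so neither socket activation nor dependencies can restart them:)

    # Stop, disable, and mask cri-docker and docker.
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
    sudo systemctl mask cri-docker.service docker.service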
	I0916 10:53:24.647535  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:53:24.663493  150386 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
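(Editor's note: the file written here is the standard crictl client configuration; pointing runtime-endpoint at the CRI-O socket is what lets the later crictl calls in this log talk to CRI-O. A minimal equivalent of the command above:)

    # Point crictl at the CRI-O socket (same content the log writes).
    cat <<'EOF' | sudo tee /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock
    EOF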
	I0916 10:53:24.663534  150386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:53:24.663571  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.673350  150386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:53:24.673417  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.683157  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.692864  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.702168  150386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:53:24.710521  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.719794  150386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.734475  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:53:24.743952  150386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:53:24.751435  150386 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:53:24.751507  150386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:53:24.758780  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:53:24.835644  150386 ssh_runner.go:195] Run: sudo systemctl restart crio
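(Editor's note: taken together, the sed edits above leave the CRI-O drop-in with a pinned pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl, then apply them via daemon-reload and a crio restart. A hedged way to confirm the resulting state by hand; the expected lines are inferred from the edits, not printed in the log:)

    # Confirm the effective CRI-O settings after the sed edits and restart.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected (roughly):
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",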
	I0916 10:53:24.943612  150386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:53:24.943708  150386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:53:24.947392  150386 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:53:24.947415  150386 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:53:24.947421  150386 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0916 10:53:24.947428  150386 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:53:24.947434  150386 command_runner.go:130] > Access: 2024-09-16 10:53:24.926948060 +0000
	I0916 10:53:24.947439  150386 command_runner.go:130] > Modify: 2024-09-16 10:53:24.926948060 +0000
	I0916 10:53:24.947444  150386 command_runner.go:130] > Change: 2024-09-16 10:53:24.926948060 +0000
	I0916 10:53:24.947448  150386 command_runner.go:130] >  Birth: -
	I0916 10:53:24.947468  150386 start.go:563] Will wait 60s for crictl version
	I0916 10:53:24.947505  150386 ssh_runner.go:195] Run: which crictl
	I0916 10:53:24.950865  150386 command_runner.go:130] > /usr/bin/crictl
	I0916 10:53:24.950944  150386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:53:24.983555  150386 command_runner.go:130] > Version:  0.1.0
	I0916 10:53:24.983579  150386 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:53:24.983585  150386 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:53:24.983590  150386 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:53:24.983635  150386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
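(Editor's note: the probe above works because crictl issues the CRI Version RPC over the socket configured in /etc/crictl.yaml; minikube waits up to 60s for it to answer. The same check can be reproduced by hand:)

    # Query the CRI runtime identity and API version over the configured socket.
    sudo crictl version
    # Expected fields, per the log: Version 0.1.0, RuntimeName cri-o,
    # RuntimeVersion 1.24.6, RuntimeApiVersion v1.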
	I0916 10:53:24.983693  150386 ssh_runner.go:195] Run: crio --version
	I0916 10:53:25.018244  150386 command_runner.go:130] > crio version 1.24.6
	I0916 10:53:25.018270  150386 command_runner.go:130] > Version:          1.24.6
	I0916 10:53:25.018277  150386 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:53:25.018281  150386 command_runner.go:130] > GitTreeState:     clean
	I0916 10:53:25.018287  150386 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:53:25.018291  150386 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:53:25.018300  150386 command_runner.go:130] > Compiler:         gc
	I0916 10:53:25.018304  150386 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:53:25.018309  150386 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:53:25.018317  150386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:53:25.018321  150386 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:53:25.018325  150386 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:53:25.018390  150386 ssh_runner.go:195] Run: crio --version
	I0916 10:53:25.050200  150386 command_runner.go:130] > crio version 1.24.6
	I0916 10:53:25.050224  150386 command_runner.go:130] > Version:          1.24.6
	I0916 10:53:25.050231  150386 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:53:25.050236  150386 command_runner.go:130] > GitTreeState:     clean
	I0916 10:53:25.050242  150386 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:53:25.050246  150386 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:53:25.050251  150386 command_runner.go:130] > Compiler:         gc
	I0916 10:53:25.050255  150386 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:53:25.050260  150386 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:53:25.050268  150386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:53:25.050272  150386 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:53:25.050276  150386 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:53:25.054319  150386 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:53:25.055860  150386 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:53:25.072765  150386 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:53:25.076270  150386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
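(Editor's note: the one-liner above is an idempotent hosts-file update: strip any existing host.minikube.internal entry, append the fresh gateway mapping, and copy the result back into place. A commented sketch of the same pattern, with the IP taken from the log; the temp file avoids truncating /etc/hosts while it is still being read:)

    # Rewrite /etc/hosts idempotently: drop any stale entry for the name,
    # then append the current gateway mapping.
    { grep -v $'\thost.minikube.internal$' /etc/hosts;
      printf '192.168.67.1\thost.minikube.internal\n'; } > /tmp/hosts.$$ \
      && sudo cp /tmp/hosts.$$ /etc/hosts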
	I0916 10:53:25.086467  150386 kubeadm.go:883] updating cluster {Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:53:25.086594  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:53:25.086643  150386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:53:25.147473  150386 command_runner.go:130] > {
	I0916 10:53:25.147502  150386 command_runner.go:130] >   "images": [
	I0916 10:53:25.147515  150386 command_runner.go:130] >     {
	I0916 10:53:25.147528  150386 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:53:25.147537  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147548  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:53:25.147562  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147568  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147579  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:53:25.147589  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:53:25.147596  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147602  150386 command_runner.go:130] >       "size": "87190579",
	I0916 10:53:25.147608  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.147616  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.147627  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.147634  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.147638  150386 command_runner.go:130] >     },
	I0916 10:53:25.147642  150386 command_runner.go:130] >     {
	I0916 10:53:25.147651  150386 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:53:25.147658  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147664  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:53:25.147670  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147675  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147685  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:53:25.147695  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:53:25.147702  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147711  150386 command_runner.go:130] >       "size": "31470524",
	I0916 10:53:25.147719  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.147723  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.147730  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.147734  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.147739  150386 command_runner.go:130] >     },
	I0916 10:53:25.147742  150386 command_runner.go:130] >     {
	I0916 10:53:25.147753  150386 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:53:25.147761  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147766  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:53:25.147772  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147779  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147789  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:53:25.147799  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:53:25.147807  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147815  150386 command_runner.go:130] >       "size": "63273227",
	I0916 10:53:25.147820  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.147827  150386 command_runner.go:130] >       "username": "nonroot",
	I0916 10:53:25.147832  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.147839  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.147844  150386 command_runner.go:130] >     },
	I0916 10:53:25.147850  150386 command_runner.go:130] >     {
	I0916 10:53:25.147857  150386 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:53:25.147863  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147869  150386 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:53:25.147876  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147881  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147890  150386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:53:25.147903  150386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:53:25.147910  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147915  150386 command_runner.go:130] >       "size": "149009664",
	I0916 10:53:25.147921  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.147925  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.147930  150386 command_runner.go:130] >       },
	I0916 10:53:25.147936  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.147941  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.147947  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.147951  150386 command_runner.go:130] >     },
	I0916 10:53:25.147955  150386 command_runner.go:130] >     {
	I0916 10:53:25.147962  150386 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:53:25.147968  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.147974  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:53:25.147980  150386 command_runner.go:130] >       ],
	I0916 10:53:25.147984  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.147994  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:53:25.148004  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:53:25.148012  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148019  150386 command_runner.go:130] >       "size": "95237600",
	I0916 10:53:25.148023  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.148029  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.148033  150386 command_runner.go:130] >       },
	I0916 10:53:25.148040  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148045  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148053  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148057  150386 command_runner.go:130] >     },
	I0916 10:53:25.148063  150386 command_runner.go:130] >     {
	I0916 10:53:25.148070  150386 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:53:25.148077  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.148084  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:53:25.148091  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148096  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.148106  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:53:25.148116  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:53:25.148122  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148127  150386 command_runner.go:130] >       "size": "89437508",
	I0916 10:53:25.148134  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.148138  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.148144  150386 command_runner.go:130] >       },
	I0916 10:53:25.148148  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148155  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148159  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148165  150386 command_runner.go:130] >     },
	I0916 10:53:25.148169  150386 command_runner.go:130] >     {
	I0916 10:53:25.148176  150386 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:53:25.148182  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.148188  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:53:25.148194  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148199  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.148208  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:53:25.148217  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:53:25.148224  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148228  150386 command_runner.go:130] >       "size": "92733849",
	I0916 10:53:25.148234  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.148239  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148245  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148250  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148265  150386 command_runner.go:130] >     },
	I0916 10:53:25.148268  150386 command_runner.go:130] >     {
	I0916 10:53:25.148274  150386 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:53:25.148278  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.148283  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:53:25.148287  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148290  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.148304  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:53:25.148312  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:53:25.148315  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148319  150386 command_runner.go:130] >       "size": "68420934",
	I0916 10:53:25.148323  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.148327  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.148330  150386 command_runner.go:130] >       },
	I0916 10:53:25.148334  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148338  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148342  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148349  150386 command_runner.go:130] >     },
	I0916 10:53:25.148353  150386 command_runner.go:130] >     {
	I0916 10:53:25.148362  150386 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:53:25.148369  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.148374  150386 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:53:25.148380  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148385  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.148394  150386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:53:25.148403  150386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:53:25.148409  150386 command_runner.go:130] >       ],
	I0916 10:53:25.148414  150386 command_runner.go:130] >       "size": "742080",
	I0916 10:53:25.148420  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.148425  150386 command_runner.go:130] >         "value": "65535"
	I0916 10:53:25.148431  150386 command_runner.go:130] >       },
	I0916 10:53:25.148436  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.148442  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.148448  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.148454  150386 command_runner.go:130] >     }
	I0916 10:53:25.148458  150386 command_runner.go:130] >   ]
	I0916 10:53:25.148464  150386 command_runner.go:130] > }
	I0916 10:53:25.148642  150386 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:53:25.148655  150386 crio.go:433] Images already preloaded, skipping extraction
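(Editor's note: the JSON dump above is the CRI image list; minikube compares it against the expected preload set for v1.31.1/crio and, since every image is present, skips tarball extraction. One way to eyeball the same data, assuming jq is available on the node:)

    # List the image tags CRI-O knows about, as the preload check sees them.
    sudo crictl images --output json | jq -r '.images[].repoTags[]'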
	I0916 10:53:25.148705  150386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:53:25.181514  150386 command_runner.go:130] > {
	I0916 10:53:25.181541  150386 command_runner.go:130] >   "images": [
	I0916 10:53:25.181546  150386 command_runner.go:130] >     {
	I0916 10:53:25.181558  150386 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:53:25.181564  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.181572  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:53:25.181577  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181582  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.181596  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:53:25.181607  150386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:53:25.181612  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181619  150386 command_runner.go:130] >       "size": "87190579",
	I0916 10:53:25.181626  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.181633  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.181649  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.181660  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.181668  150386 command_runner.go:130] >     },
	I0916 10:53:25.181676  150386 command_runner.go:130] >     {
	I0916 10:53:25.181687  150386 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:53:25.181696  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.181706  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:53:25.181714  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181722  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.181737  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:53:25.181753  150386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:53:25.181761  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181773  150386 command_runner.go:130] >       "size": "31470524",
	I0916 10:53:25.181783  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.181792  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.181799  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.181809  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.181819  150386 command_runner.go:130] >     },
	I0916 10:53:25.181827  150386 command_runner.go:130] >     {
	I0916 10:53:25.181844  150386 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:53:25.181853  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.181863  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:53:25.181872  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181879  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.181894  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:53:25.181909  150386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:53:25.181917  150386 command_runner.go:130] >       ],
	I0916 10:53:25.181924  150386 command_runner.go:130] >       "size": "63273227",
	I0916 10:53:25.181933  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.181941  150386 command_runner.go:130] >       "username": "nonroot",
	I0916 10:53:25.181951  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.181961  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.181967  150386 command_runner.go:130] >     },
	I0916 10:53:25.181974  150386 command_runner.go:130] >     {
	I0916 10:53:25.181984  150386 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:53:25.181993  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182001  150386 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:53:25.182007  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182015  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182027  150386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:53:25.182046  150386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:53:25.182055  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182061  150386 command_runner.go:130] >       "size": "149009664",
	I0916 10:53:25.182070  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182078  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.182086  150386 command_runner.go:130] >       },
	I0916 10:53:25.182095  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182104  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182113  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182121  150386 command_runner.go:130] >     },
	I0916 10:53:25.182132  150386 command_runner.go:130] >     {
	I0916 10:53:25.182145  150386 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:53:25.182152  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182164  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:53:25.182173  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182183  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182198  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:53:25.182215  150386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:53:25.182223  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182230  150386 command_runner.go:130] >       "size": "95237600",
	I0916 10:53:25.182237  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182246  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.182255  150386 command_runner.go:130] >       },
	I0916 10:53:25.182262  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182271  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182279  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182287  150386 command_runner.go:130] >     },
	I0916 10:53:25.182294  150386 command_runner.go:130] >     {
	I0916 10:53:25.182308  150386 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:53:25.182317  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182327  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:53:25.182336  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182343  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182359  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:53:25.182375  150386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:53:25.182383  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182389  150386 command_runner.go:130] >       "size": "89437508",
	I0916 10:53:25.182398  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182406  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.182414  150386 command_runner.go:130] >       },
	I0916 10:53:25.182421  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182430  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182437  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182445  150386 command_runner.go:130] >     },
	I0916 10:53:25.182451  150386 command_runner.go:130] >     {
	I0916 10:53:25.182463  150386 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:53:25.182472  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182480  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:53:25.182489  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182497  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182512  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:53:25.182526  150386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:53:25.182534  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182541  150386 command_runner.go:130] >       "size": "92733849",
	I0916 10:53:25.182551  150386 command_runner.go:130] >       "uid": null,
	I0916 10:53:25.182560  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182571  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182580  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182586  150386 command_runner.go:130] >     },
	I0916 10:53:25.182594  150386 command_runner.go:130] >     {
	I0916 10:53:25.182603  150386 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:53:25.182609  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182617  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:53:25.182626  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182633  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182656  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:53:25.182670  150386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:53:25.182676  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182683  150386 command_runner.go:130] >       "size": "68420934",
	I0916 10:53:25.182690  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182700  150386 command_runner.go:130] >         "value": "0"
	I0916 10:53:25.182708  150386 command_runner.go:130] >       },
	I0916 10:53:25.182715  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182723  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182733  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182740  150386 command_runner.go:130] >     },
	I0916 10:53:25.182750  150386 command_runner.go:130] >     {
	I0916 10:53:25.182764  150386 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:53:25.182774  150386 command_runner.go:130] >       "repoTags": [
	I0916 10:53:25.182784  150386 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:53:25.182792  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182800  150386 command_runner.go:130] >       "repoDigests": [
	I0916 10:53:25.182813  150386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:53:25.182828  150386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:53:25.182836  150386 command_runner.go:130] >       ],
	I0916 10:53:25.182855  150386 command_runner.go:130] >       "size": "742080",
	I0916 10:53:25.182866  150386 command_runner.go:130] >       "uid": {
	I0916 10:53:25.182875  150386 command_runner.go:130] >         "value": "65535"
	I0916 10:53:25.182882  150386 command_runner.go:130] >       },
	I0916 10:53:25.182889  150386 command_runner.go:130] >       "username": "",
	I0916 10:53:25.182900  150386 command_runner.go:130] >       "spec": null,
	I0916 10:53:25.182910  150386 command_runner.go:130] >       "pinned": false
	I0916 10:53:25.182917  150386 command_runner.go:130] >     }
	I0916 10:53:25.182925  150386 command_runner.go:130] >   ]
	I0916 10:53:25.182933  150386 command_runner.go:130] > }
	I0916 10:53:25.183047  150386 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:53:25.183060  150386 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:53:25.183070  150386 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.31.1 crio true true} ...
	I0916 10:53:25.183176  150386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
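(Editor's note: the block above is the systemd unit body minikube generates for the kubelet: it clears any inherited ExecStart, then relaunches the versioned kubelet binary with this node's name and IP. Installed by hand it would look roughly like the sketch below; the drop-in path 10-kubeadm.conf is the conventional location and an assumption here, as the log does not show where the unit is written:)

    # Hypothetical install of the generated kubelet drop-in; the unit body
    # mirrors the log: reset ExecStart, then start the v1.31.1 kubelet.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
    EOF
    sudo systemctl daemon-reload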
	I0916 10:53:25.183254  150386 ssh_runner.go:195] Run: crio config
	I0916 10:53:25.220901  150386 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 10:53:25.220935  150386 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 10:53:25.220945  150386 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 10:53:25.220950  150386 command_runner.go:130] > #
	I0916 10:53:25.220958  150386 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 10:53:25.220966  150386 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 10:53:25.220975  150386 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 10:53:25.220986  150386 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 10:53:25.221000  150386 command_runner.go:130] > # reload'.
	I0916 10:53:25.221014  150386 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 10:53:25.221029  150386 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 10:53:25.221043  150386 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 10:53:25.221058  150386 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 10:53:25.221068  150386 command_runner.go:130] > [crio]
	I0916 10:53:25.221081  150386 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 10:53:25.221093  150386 command_runner.go:130] > # containers images, in this directory.
	I0916 10:53:25.221125  150386 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0916 10:53:25.221141  150386 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 10:53:25.221153  150386 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0916 10:53:25.221168  150386 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 10:53:25.221182  150386 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 10:53:25.221194  150386 command_runner.go:130] > # storage_driver = "vfs"
	I0916 10:53:25.221203  150386 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 10:53:25.221213  150386 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 10:53:25.221223  150386 command_runner.go:130] > # storage_option = [
	I0916 10:53:25.221230  150386 command_runner.go:130] > # ]
	I0916 10:53:25.221244  150386 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 10:53:25.221258  150386 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 10:53:25.221270  150386 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 10:53:25.221284  150386 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 10:53:25.221298  150386 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 10:53:25.221310  150386 command_runner.go:130] > # always happen on a node reboot
	I0916 10:53:25.221322  150386 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 10:53:25.221357  150386 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 10:53:25.221379  150386 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 10:53:25.221392  150386 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 10:53:25.221399  150386 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0916 10:53:25.221413  150386 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 10:53:25.221428  150386 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 10:53:25.221438  150386 command_runner.go:130] > # internal_wipe = true
	I0916 10:53:25.221448  150386 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 10:53:25.221461  150386 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 10:53:25.221477  150386 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 10:53:25.221494  150386 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 10:53:25.221505  150386 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 10:53:25.221511  150386 command_runner.go:130] > [crio.api]
	I0916 10:53:25.221520  150386 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 10:53:25.221532  150386 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 10:53:25.221545  150386 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 10:53:25.221554  150386 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 10:53:25.221569  150386 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 10:53:25.221586  150386 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 10:53:25.221598  150386 command_runner.go:130] > # stream_port = "0"
	I0916 10:53:25.221613  150386 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 10:53:25.221624  150386 command_runner.go:130] > # stream_enable_tls = false
	I0916 10:53:25.221634  150386 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 10:53:25.221646  150386 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 10:53:25.221656  150386 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 10:53:25.221671  150386 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 10:53:25.221677  150386 command_runner.go:130] > # minutes.
	I0916 10:53:25.221685  150386 command_runner.go:130] > # stream_tls_cert = ""
	I0916 10:53:25.221699  150386 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 10:53:25.221712  150386 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 10:53:25.221721  150386 command_runner.go:130] > # stream_tls_key = ""
	I0916 10:53:25.221730  150386 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 10:53:25.221741  150386 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 10:53:25.221751  150386 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 10:53:25.221761  150386 command_runner.go:130] > # stream_tls_ca = ""
	I0916 10:53:25.221774  150386 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:53:25.221786  150386 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0916 10:53:25.221801  150386 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:53:25.221810  150386 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0916 10:53:25.221838  150386 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 10:53:25.221853  150386 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 10:53:25.221859  150386 command_runner.go:130] > [crio.runtime]
	I0916 10:53:25.221872  150386 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 10:53:25.221884  150386 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 10:53:25.221891  150386 command_runner.go:130] > # "nofile=1024:2048"
	I0916 10:53:25.221902  150386 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 10:53:25.221912  150386 command_runner.go:130] > # default_ulimits = [
	I0916 10:53:25.221918  150386 command_runner.go:130] > # ]
	I0916 10:53:25.221932  150386 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 10:53:25.221940  150386 command_runner.go:130] > # no_pivot = false
	I0916 10:53:25.221952  150386 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 10:53:25.221964  150386 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 10:53:25.221976  150386 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 10:53:25.221986  150386 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 10:53:25.221994  150386 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 10:53:25.222008  150386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:53:25.222018  150386 command_runner.go:130] > # conmon = ""
	I0916 10:53:25.222025  150386 command_runner.go:130] > # Cgroup setting for conmon
	I0916 10:53:25.222044  150386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 10:53:25.222055  150386 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 10:53:25.222066  150386 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 10:53:25.222075  150386 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 10:53:25.222086  150386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:53:25.222098  150386 command_runner.go:130] > # conmon_env = [
	I0916 10:53:25.222104  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222116  150386 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 10:53:25.222123  150386 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 10:53:25.222132  150386 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 10:53:25.222138  150386 command_runner.go:130] > # default_env = [
	I0916 10:53:25.222143  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222158  150386 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 10:53:25.222165  150386 command_runner.go:130] > # selinux = false
	I0916 10:53:25.222177  150386 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 10:53:25.222190  150386 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 10:53:25.222201  150386 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 10:53:25.222213  150386 command_runner.go:130] > # seccomp_profile = ""
	I0916 10:53:25.222226  150386 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 10:53:25.222238  150386 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 10:53:25.222250  150386 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 10:53:25.222261  150386 command_runner.go:130] > # which might increase security.
	I0916 10:53:25.222272  150386 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0916 10:53:25.222285  150386 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 10:53:25.222297  150386 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 10:53:25.222310  150386 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 10:53:25.222323  150386 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 10:53:25.222334  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.222346  150386 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 10:53:25.222358  150386 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 10:53:25.222368  150386 command_runner.go:130] > # the cgroup blockio controller.
	I0916 10:53:25.222378  150386 command_runner.go:130] > # blockio_config_file = ""
	I0916 10:53:25.222388  150386 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 10:53:25.222398  150386 command_runner.go:130] > # irqbalance daemon.
	I0916 10:53:25.222409  150386 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 10:53:25.222422  150386 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 10:53:25.222433  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.222442  150386 command_runner.go:130] > # rdt_config_file = ""
	I0916 10:53:25.222458  150386 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 10:53:25.222467  150386 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 10:53:25.222477  150386 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 10:53:25.222487  150386 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 10:53:25.222499  150386 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 10:53:25.222513  150386 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 10:53:25.222521  150386 command_runner.go:130] > # will be added.
	I0916 10:53:25.222532  150386 command_runner.go:130] > # default_capabilities = [
	I0916 10:53:25.222542  150386 command_runner.go:130] > # 	"CHOWN",
	I0916 10:53:25.222551  150386 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 10:53:25.222558  150386 command_runner.go:130] > # 	"FSETID",
	I0916 10:53:25.222567  150386 command_runner.go:130] > # 	"FOWNER",
	I0916 10:53:25.222606  150386 command_runner.go:130] > # 	"SETGID",
	I0916 10:53:25.222615  150386 command_runner.go:130] > # 	"SETUID",
	I0916 10:53:25.222624  150386 command_runner.go:130] > # 	"SETPCAP",
	I0916 10:53:25.222632  150386 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 10:53:25.222640  150386 command_runner.go:130] > # 	"KILL",
	I0916 10:53:25.222649  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222661  150386 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 10:53:25.222675  150386 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 10:53:25.222686  150386 command_runner.go:130] > # add_inheritable_capabilities = true
	I0916 10:53:25.222698  150386 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 10:53:25.222711  150386 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:53:25.222719  150386 command_runner.go:130] > default_sysctls = [
	I0916 10:53:25.222729  150386 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 10:53:25.222737  150386 command_runner.go:130] > ]
	I0916 10:53:25.222745  150386 command_runner.go:130] > # List of devices on the host that a
	I0916 10:53:25.222756  150386 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 10:53:25.222764  150386 command_runner.go:130] > # allowed_devices = [
	I0916 10:53:25.222772  150386 command_runner.go:130] > # 	"/dev/fuse",
	I0916 10:53:25.222778  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222785  150386 command_runner.go:130] > # List of additional devices. specified as
	I0916 10:53:25.222819  150386 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 10:53:25.222829  150386 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 10:53:25.222840  150386 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:53:25.222849  150386 command_runner.go:130] > # additional_devices = [
	I0916 10:53:25.222857  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222864  150386 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 10:53:25.222872  150386 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 10:53:25.222880  150386 command_runner.go:130] > # 	"/etc/cdi",
	I0916 10:53:25.222889  150386 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 10:53:25.222897  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222906  150386 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 10:53:25.222918  150386 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 10:53:25.222931  150386 command_runner.go:130] > # Defaults to false.
	I0916 10:53:25.222942  150386 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 10:53:25.222951  150386 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 10:53:25.222963  150386 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 10:53:25.222971  150386 command_runner.go:130] > # hooks_dir = [
	I0916 10:53:25.222981  150386 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 10:53:25.222988  150386 command_runner.go:130] > # ]
	I0916 10:53:25.222997  150386 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 10:53:25.223009  150386 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 10:53:25.223019  150386 command_runner.go:130] > # its default mounts from the following two files:
	I0916 10:53:25.223026  150386 command_runner.go:130] > #
	I0916 10:53:25.223035  150386 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 10:53:25.223047  150386 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 10:53:25.223058  150386 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 10:53:25.223066  150386 command_runner.go:130] > #
	I0916 10:53:25.223078  150386 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 10:53:25.223090  150386 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 10:53:25.223103  150386 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 10:53:25.223113  150386 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 10:53:25.223118  150386 command_runner.go:130] > #
	I0916 10:53:25.223127  150386 command_runner.go:130] > # default_mounts_file = ""
	I0916 10:53:25.223135  150386 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 10:53:25.223149  150386 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 10:53:25.223159  150386 command_runner.go:130] > # pids_limit = 0
	I0916 10:53:25.223172  150386 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 10:53:25.223184  150386 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 10:53:25.223196  150386 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 10:53:25.223211  150386 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 10:53:25.223221  150386 command_runner.go:130] > # log_size_max = -1
	I0916 10:53:25.223236  150386 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 10:53:25.223245  150386 command_runner.go:130] > # log_to_journald = false
	I0916 10:53:25.223258  150386 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 10:53:25.223268  150386 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 10:53:25.223280  150386 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 10:53:25.223291  150386 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 10:53:25.223303  150386 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 10:53:25.223312  150386 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 10:53:25.223322  150386 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 10:53:25.223334  150386 command_runner.go:130] > # read_only = false
	I0916 10:53:25.223346  150386 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 10:53:25.223359  150386 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 10:53:25.223368  150386 command_runner.go:130] > # live configuration reload.
	I0916 10:53:25.223374  150386 command_runner.go:130] > # log_level = "info"
	I0916 10:53:25.223384  150386 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 10:53:25.223395  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.223404  150386 command_runner.go:130] > # log_filter = ""
	I0916 10:53:25.223415  150386 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 10:53:25.223428  150386 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 10:53:25.223437  150386 command_runner.go:130] > # separated by comma.
	I0916 10:53:25.223447  150386 command_runner.go:130] > # uid_mappings = ""
	I0916 10:53:25.223458  150386 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 10:53:25.223470  150386 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 10:53:25.223479  150386 command_runner.go:130] > # separated by comma.
	I0916 10:53:25.223488  150386 command_runner.go:130] > # gid_mappings = ""
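The uid_mappings/gid_mappings format described above is a comma-separated list of containerID:hostID:size triples. A minimal Go sketch of a parser for that format (parseMappings and idMap are hypothetical names, not minikube or CRI-O code):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// idMap mirrors one "containerID:hostID:size" range.
	type idMap struct{ ContainerID, HostID, Size uint32 }

	// parseMappings parses the comma-separated triple format the
	// uid_mappings/gid_mappings comments above describe.
	func parseMappings(s string) ([]idMap, error) {
		var out []idMap
		for _, r := range strings.Split(s, ",") {
			parts := strings.Split(strings.TrimSpace(r), ":")
			if len(parts) != 3 {
				return nil, fmt.Errorf("bad range %q", r)
			}
			var v [3]uint32
			for i, p := range parts {
				n, err := strconv.ParseUint(p, 10, 32)
				if err != nil {
					return nil, err
				}
				v[i] = uint32(n)
			}
			out = append(out, idMap{v[0], v[1], v[2]})
		}
		return out, nil
	}

	func main() {
		// Two ranges, e.g. map container IDs 0-65535 to host IDs 100000+.
		fmt.Println(parseMappings("0:100000:65536,65536:1000:1"))
	}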
	I0916 10:53:25.223501  150386 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 10:53:25.223513  150386 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:53:25.223524  150386 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:53:25.223534  150386 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 10:53:25.223545  150386 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 10:53:25.223556  150386 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:53:25.223568  150386 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:53:25.223583  150386 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 10:53:25.223594  150386 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 10:53:25.223603  150386 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 10:53:25.223616  150386 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 10:53:25.223626  150386 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 10:53:25.223641  150386 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 10:53:25.223658  150386 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 10:53:25.223668  150386 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 10:53:25.223681  150386 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 10:53:25.223691  150386 command_runner.go:130] > # drop_infra_ctr = true
	I0916 10:53:25.223704  150386 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 10:53:25.223715  150386 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 10:53:25.223728  150386 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 10:53:25.223738  150386 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 10:53:25.223749  150386 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 10:53:25.223763  150386 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 10:53:25.223772  150386 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 10:53:25.223783  150386 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 10:53:25.223792  150386 command_runner.go:130] > # pinns_path = ""
	I0916 10:53:25.223803  150386 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 10:53:25.223815  150386 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0916 10:53:25.223827  150386 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0916 10:53:25.223834  150386 command_runner.go:130] > # default_runtime = "runc"
	I0916 10:53:25.223839  150386 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 10:53:25.223849  150386 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0916 10:53:25.223861  150386 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 10:53:25.223868  150386 command_runner.go:130] > # creation as a file is not desired either.
	I0916 10:53:25.223875  150386 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 10:53:25.223882  150386 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 10:53:25.223887  150386 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 10:53:25.223893  150386 command_runner.go:130] > # ]
	I0916 10:53:25.223899  150386 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 10:53:25.223907  150386 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 10:53:25.223916  150386 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0916 10:53:25.223923  150386 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0916 10:53:25.223928  150386 command_runner.go:130] > #
	I0916 10:53:25.223933  150386 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0916 10:53:25.223940  150386 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0916 10:53:25.223944  150386 command_runner.go:130] > #  runtime_type = "oci"
	I0916 10:53:25.223949  150386 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0916 10:53:25.223957  150386 command_runner.go:130] > #  privileged_without_host_devices = false
	I0916 10:53:25.223961  150386 command_runner.go:130] > #  allowed_annotations = []
	I0916 10:53:25.223965  150386 command_runner.go:130] > # Where:
	I0916 10:53:25.223970  150386 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0916 10:53:25.223978  150386 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0916 10:53:25.223987  150386 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 10:53:25.223993  150386 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 10:53:25.223999  150386 command_runner.go:130] > #   in $PATH.
	I0916 10:53:25.224006  150386 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0916 10:53:25.224013  150386 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 10:53:25.224019  150386 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0916 10:53:25.224027  150386 command_runner.go:130] > #   state.
	I0916 10:53:25.224034  150386 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 10:53:25.224042  150386 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 10:53:25.224048  150386 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 10:53:25.224056  150386 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 10:53:25.224062  150386 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 10:53:25.224070  150386 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 10:53:25.224074  150386 command_runner.go:130] > #   The currently recognized values are:
	I0916 10:53:25.224083  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 10:53:25.224092  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 10:53:25.224102  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 10:53:25.224110  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 10:53:25.224119  150386 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 10:53:25.224128  150386 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 10:53:25.224134  150386 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 10:53:25.224142  150386 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0916 10:53:25.224147  150386 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 10:53:25.224154  150386 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 10:53:25.224158  150386 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0916 10:53:25.224164  150386 command_runner.go:130] > runtime_type = "oci"
	I0916 10:53:25.224169  150386 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 10:53:25.224175  150386 command_runner.go:130] > runtime_config_path = ""
	I0916 10:53:25.224179  150386 command_runner.go:130] > monitor_path = ""
	I0916 10:53:25.224185  150386 command_runner.go:130] > monitor_cgroup = ""
	I0916 10:53:25.224190  150386 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 10:53:25.224220  150386 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0916 10:53:25.224226  150386 command_runner.go:130] > # running containers
	I0916 10:53:25.224230  150386 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0916 10:53:25.224235  150386 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0916 10:53:25.224244  150386 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0916 10:53:25.224250  150386 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0916 10:53:25.224258  150386 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0916 10:53:25.224263  150386 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0916 10:53:25.224268  150386 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0916 10:53:25.224272  150386 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0916 10:53:25.224279  150386 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0916 10:53:25.224283  150386 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0916 10:53:25.224293  150386 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 10:53:25.224300  150386 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 10:53:25.224307  150386 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 10:53:25.224316  150386 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0916 10:53:25.224323  150386 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 10:53:25.224331  150386 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 10:53:25.224340  150386 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 10:53:25.224350  150386 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 10:53:25.224355  150386 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 10:53:25.224364  150386 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 10:53:25.224367  150386 command_runner.go:130] > # Example:
	I0916 10:53:25.224372  150386 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 10:53:25.224379  150386 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 10:53:25.224384  150386 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 10:53:25.224393  150386 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 10:53:25.224396  150386 command_runner.go:130] > # cpuset = 0
	I0916 10:53:25.224400  150386 command_runner.go:130] > # cpushares = "0-1"
	I0916 10:53:25.224405  150386 command_runner.go:130] > # Where:
	I0916 10:53:25.224411  150386 command_runner.go:130] > # The workload name is workload-type.
	I0916 10:53:25.224419  150386 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 10:53:25.224426  150386 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 10:53:25.224432  150386 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 10:53:25.224442  150386 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 10:53:25.224447  150386 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 10:53:25.224451  150386 command_runner.go:130] > # 
	I0916 10:53:25.224457  150386 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 10:53:25.224463  150386 command_runner.go:130] > #
	I0916 10:53:25.224469  150386 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 10:53:25.224477  150386 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 10:53:25.224483  150386 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 10:53:25.224491  150386 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 10:53:25.224497  150386 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 10:53:25.224504  150386 command_runner.go:130] > [crio.image]
	I0916 10:53:25.224510  150386 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 10:53:25.224518  150386 command_runner.go:130] > # default_transport = "docker://"
	I0916 10:53:25.224524  150386 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 10:53:25.224532  150386 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:53:25.224536  150386 command_runner.go:130] > # global_auth_file = ""
	I0916 10:53:25.224543  150386 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 10:53:25.224548  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.224554  150386 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 10:53:25.224560  150386 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 10:53:25.224568  150386 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:53:25.224577  150386 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:53:25.224584  150386 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 10:53:25.224589  150386 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 10:53:25.224597  150386 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 10:53:25.224603  150386 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 10:53:25.224611  150386 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 10:53:25.224615  150386 command_runner.go:130] > # pause_command = "/pause"
	I0916 10:53:25.224623  150386 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 10:53:25.224631  150386 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 10:53:25.224640  150386 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 10:53:25.224648  150386 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 10:53:25.224653  150386 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 10:53:25.224658  150386 command_runner.go:130] > # signature_policy = ""
	I0916 10:53:25.224667  150386 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 10:53:25.224675  150386 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 10:53:25.224679  150386 command_runner.go:130] > # changing them here.
	I0916 10:53:25.224685  150386 command_runner.go:130] > # insecure_registries = [
	I0916 10:53:25.224689  150386 command_runner.go:130] > # ]
	I0916 10:53:25.224695  150386 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 10:53:25.224702  150386 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 10:53:25.224707  150386 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 10:53:25.224714  150386 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 10:53:25.224718  150386 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 10:53:25.224723  150386 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 10:53:25.224727  150386 command_runner.go:130] > # CNI plugins.
	I0916 10:53:25.224731  150386 command_runner.go:130] > [crio.network]
	I0916 10:53:25.224739  150386 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 10:53:25.224744  150386 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 10:53:25.224748  150386 command_runner.go:130] > # cni_default_network = ""
	I0916 10:53:25.224756  150386 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 10:53:25.224762  150386 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 10:53:25.224767  150386 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 10:53:25.224773  150386 command_runner.go:130] > # plugin_dirs = [
	I0916 10:53:25.224777  150386 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 10:53:25.224782  150386 command_runner.go:130] > # ]
	I0916 10:53:25.224788  150386 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 10:53:25.224791  150386 command_runner.go:130] > [crio.metrics]
	I0916 10:53:25.224796  150386 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 10:53:25.224801  150386 command_runner.go:130] > # enable_metrics = false
	I0916 10:53:25.224805  150386 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 10:53:25.224810  150386 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 10:53:25.224819  150386 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 10:53:25.224826  150386 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 10:53:25.224834  150386 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 10:53:25.224838  150386 command_runner.go:130] > # metrics_collectors = [
	I0916 10:53:25.224843  150386 command_runner.go:130] > # 	"operations",
	I0916 10:53:25.224847  150386 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 10:53:25.224852  150386 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 10:53:25.224858  150386 command_runner.go:130] > # 	"operations_errors",
	I0916 10:53:25.224862  150386 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 10:53:25.224867  150386 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 10:53:25.224871  150386 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 10:53:25.224875  150386 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 10:53:25.224879  150386 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 10:53:25.224883  150386 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 10:53:25.224887  150386 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 10:53:25.224891  150386 command_runner.go:130] > # 	"containers_oom_total",
	I0916 10:53:25.224895  150386 command_runner.go:130] > # 	"containers_oom",
	I0916 10:53:25.224901  150386 command_runner.go:130] > # 	"processes_defunct",
	I0916 10:53:25.224905  150386 command_runner.go:130] > # 	"operations_total",
	I0916 10:53:25.224911  150386 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 10:53:25.224915  150386 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 10:53:25.224920  150386 command_runner.go:130] > # 	"operations_errors_total",
	I0916 10:53:25.224924  150386 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 10:53:25.224928  150386 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 10:53:25.224934  150386 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 10:53:25.224939  150386 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 10:53:25.224945  150386 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 10:53:25.224949  150386 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 10:53:25.224952  150386 command_runner.go:130] > # ]
	I0916 10:53:25.224957  150386 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 10:53:25.224961  150386 command_runner.go:130] > # metrics_port = 9090
	I0916 10:53:25.224966  150386 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 10:53:25.224970  150386 command_runner.go:130] > # metrics_socket = ""
	I0916 10:53:25.224977  150386 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 10:53:25.224985  150386 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 10:53:25.224993  150386 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 10:53:25.224997  150386 command_runner.go:130] > # certificate on any modification event.
	I0916 10:53:25.225002  150386 command_runner.go:130] > # metrics_cert = ""
	I0916 10:53:25.225007  150386 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 10:53:25.225015  150386 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 10:53:25.225018  150386 command_runner.go:130] > # metrics_key = ""
	I0916 10:53:25.225029  150386 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 10:53:25.225035  150386 command_runner.go:130] > [crio.tracing]
	I0916 10:53:25.225039  150386 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 10:53:25.225044  150386 command_runner.go:130] > # enable_tracing = false
	I0916 10:53:25.225048  150386 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 10:53:25.225052  150386 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 10:53:25.225056  150386 command_runner.go:130] > # Number of samples to collect per million spans.
	I0916 10:53:25.225060  150386 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 10:53:25.225066  150386 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 10:53:25.225069  150386 command_runner.go:130] > [crio.stats]
	I0916 10:53:25.225075  150386 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 10:53:25.225080  150386 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 10:53:25.225084  150386 command_runner.go:130] > # stats_collection_period = 0
	I0916 10:53:25.225116  150386 command_runner.go:130] ! time="2024-09-16 10:53:25.218754149Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0916 10:53:25.225127  150386 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 10:53:25.225207  150386 cni.go:84] Creating CNI manager for ""
	I0916 10:53:25.225212  150386 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:53:25.225220  150386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:53:25.225240  150386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-026168 NodeName:multinode-026168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:53:25.225402  150386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-026168"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:53:25.225480  150386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:53:25.233837  150386 command_runner.go:130] > kubeadm
	I0916 10:53:25.233861  150386 command_runner.go:130] > kubectl
	I0916 10:53:25.233867  150386 command_runner.go:130] > kubelet
	I0916 10:53:25.233893  150386 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:53:25.233945  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:53:25.241883  150386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0916 10:53:25.258665  150386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:53:25.275460  150386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0916 10:53:25.291931  150386 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:53:25.295165  150386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
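The bash pipeline in the Run line above makes the /etc/hosts update idempotent: filter out any line already ending in "<TAB>control-plane.minikube.internal", then append the fresh mapping and copy the result back. A rough Go equivalent of that grep/echo/cp sequence (ensureHostsEntry is a hypothetical helper; minikube actually performs this over SSH, as logged):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any line already mapping the name, then appends
	// the desired "ip<TAB>name" entry, mirroring the grep -v / echo / cp pipeline.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // old entry for this hostname
			}
			kept = append(kept, line)
		}
		out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + name + "\n"
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.67.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}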
	I0916 10:53:25.305057  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:53:25.378997  150386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:53:25.391812  150386 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.2
	I0916 10:53:25.391836  150386 certs.go:194] generating shared ca certs ...
	I0916 10:53:25.391854  150386 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.392006  150386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:53:25.392059  150386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:53:25.392083  150386 certs.go:256] generating profile certs ...
	I0916 10:53:25.392154  150386 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key
	I0916 10:53:25.392179  150386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt with IP's: []
	I0916 10:53:25.481640  150386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt ...
	I0916 10:53:25.481678  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt: {Name:mk9bd3c2540afe41a9b495b48558c06f33cad4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.481875  150386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key ...
	I0916 10:53:25.481890  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key: {Name:mkc369c04f3bf5390d2f7aaeb26ec87bc68b4e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.482002  150386 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66
	I0916 10:53:25.482030  150386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt.d8814b66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0916 10:53:25.775934  150386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt.d8814b66 ...
	I0916 10:53:25.775971  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt.d8814b66: {Name:mk3be0689653695bd78826696ae2b5515df82105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.776191  150386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66 ...
	I0916 10:53:25.776209  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66: {Name:mk742343203e36bcee65f9aa431aa427c1eb2e9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.776305  150386 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt.d8814b66 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt
	I0916 10:53:25.776417  150386 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key
	I0916 10:53:25.776503  150386 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key
	I0916 10:53:25.776525  150386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt with IP's: []
	I0916 10:53:25.956310  150386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt ...
	I0916 10:53:25.956349  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt: {Name:mkda10595286654079142e1eff4429efbace9338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:25.956551  150386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key ...
	I0916 10:53:25.956576  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key: {Name:mkc963296c8321762a9d334c4bc71418f9425823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
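Each crypto.go:68 step above issues a CA-signed certificate, optionally with IP SANs (the apiserver cert earlier lists 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.67.2). A self-contained Go sketch of that pattern using the standard library (a throwaway CA is generated here for illustration; minikube reuses its existing ca.key/ca.crt, and errors are elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s below
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert carrying the IP SANs seen in the log.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.67.2"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}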
	I0916 10:53:25.956695  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:53:25.956719  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:53:25.956734  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:53:25.956750  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:53:25.956769  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:53:25.956789  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:53:25.956808  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:53:25.956826  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:53:25.956893  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:53:25.956939  150386 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:53:25.956952  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:53:25.956984  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:53:25.957018  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:53:25.957050  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:53:25.957106  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:53:25.957152  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:53:25.957174  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:25.957192  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:53:25.957794  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:53:25.981746  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:53:26.004628  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:53:26.027194  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:53:26.049678  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:53:26.072111  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:53:26.093871  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:53:26.116795  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:53:26.138967  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:53:26.161181  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:53:26.183991  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:53:26.207456  150386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:53:26.224158  150386 ssh_runner.go:195] Run: openssl version
	I0916 10:53:26.229088  150386 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:53:26.229252  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:53:26.237954  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:53:26.241388  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:53:26.241420  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:53:26.241469  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:53:26.248290  150386 command_runner.go:130] > 3ec20f2e
	I0916 10:53:26.248448  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:53:26.257725  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:53:26.266765  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:26.270336  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:26.270384  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:26.270438  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:53:26.277654  150386 command_runner.go:130] > b5213941
	I0916 10:53:26.277728  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:53:26.287565  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:53:26.297016  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:53:26.300770  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:53:26.300829  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:53:26.300872  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:53:26.307375  150386 command_runner.go:130] > 51391683
	I0916 10:53:26.307459  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
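The three blocks above all follow the OpenSSL trust-directory convention: compute the certificate's subject hash with `openssl x509 -hash -noout -in <cert>` and symlink the file as <hash>.0 under /etc/ssl/certs. A Go sketch of that install step (installCert is a hypothetical helper wrapping the same openssl invocation the log shows):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCert links certPath into certsDir under its OpenSSL subject
	// hash, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem above.
	func installCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace a stale link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}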
	I0916 10:53:26.316661  150386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:53:26.320055  150386 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:53:26.320103  150386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:53:26.320144  150386 kubeadm.go:392] StartCluster: {Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:53:26.320226  150386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:53:26.320275  150386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:53:26.355171  150386 cri.go:89] found id: ""
	I0916 10:53:26.355249  150386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:53:26.363356  150386 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0916 10:53:26.363387  150386 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0916 10:53:26.363396  150386 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0916 10:53:26.364086  150386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:53:26.372625  150386 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:53:26.372684  150386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:53:26.381125  150386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0916 10:53:26.381156  150386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0916 10:53:26.381169  150386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0916 10:53:26.381181  150386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:53:26.381221  150386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:53:26.381236  150386 kubeadm.go:157] found existing configuration files:
	
	I0916 10:53:26.381286  150386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:53:26.389970  150386 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:53:26.390026  150386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:53:26.390078  150386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:53:26.398312  150386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:53:26.406493  150386 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:53:26.406549  150386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:53:26.406610  150386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:53:26.414878  150386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:53:26.423137  150386 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:53:26.423193  150386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:53:26.423244  150386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:53:26.431078  150386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:53:26.439247  150386 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:53:26.439298  150386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:53:26.439345  150386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
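The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and anything that does not reference it is deleted before kubeadm init runs. A minimal local sketch of the same pattern in Go (illustrative only; minikube actually runs these commands over SSH, and the paths and endpoint below are copied from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs mirrors the grep-then-rm sequence above: any config
// that does not mention the expected control-plane endpoint is removed so
// kubeadm can regenerate it.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config exists and points at the right endpoint
		}
		if os.Remove(p) == nil {
			fmt.Println("removed stale config:", p)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}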
	I0916 10:53:26.447717  150386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:53:26.484433  150386 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:53:26.484477  150386 command_runner.go:130] > [init] Using Kubernetes version: v1.31.1
	I0916 10:53:26.484545  150386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:53:26.484555  150386 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 10:53:26.501068  150386 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:53:26.501100  150386 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:53:26.501168  150386 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:53:26.501193  150386 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:53:26.501262  150386 kubeadm.go:310] OS: Linux
	I0916 10:53:26.501274  150386 command_runner.go:130] > OS: Linux
	I0916 10:53:26.501374  150386 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:53:26.501395  150386 command_runner.go:130] > CGROUPS_CPU: enabled
	I0916 10:53:26.501456  150386 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:53:26.501467  150386 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0916 10:53:26.501527  150386 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:53:26.501537  150386 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0916 10:53:26.501630  150386 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:53:26.501642  150386 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0916 10:53:26.501719  150386 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:53:26.501737  150386 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0916 10:53:26.501817  150386 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:53:26.501829  150386 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0916 10:53:26.501881  150386 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:53:26.501894  150386 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0916 10:53:26.501965  150386 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:53:26.501981  150386 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0916 10:53:26.502049  150386 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:53:26.502060  150386 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0916 10:53:26.554639  150386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:53:26.554653  150386 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:53:26.554815  150386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:53:26.554832  150386 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:53:26.554962  150386 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:53:26.554974  150386 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:53:26.560763  150386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:53:26.560850  150386 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:53:26.563081  150386 out.go:235]   - Generating certificates and keys ...
	I0916 10:53:26.563189  150386 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0916 10:53:26.563204  150386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:53:26.563300  150386 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0916 10:53:26.563323  150386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:53:26.661612  150386 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:53:26.661640  150386 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:53:26.919823  150386 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:53:26.919861  150386 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:53:27.005190  150386 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:53:27.005221  150386 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0916 10:53:27.226400  150386 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:53:27.226457  150386 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0916 10:53:27.315950  150386 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:53:27.315981  150386 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0916 10:53:27.316132  150386 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-026168] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:53:27.316150  150386 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-026168] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:53:27.612384  150386 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:53:27.612414  150386 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0916 10:53:27.612550  150386 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-026168] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:53:27.612565  150386 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-026168] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:53:27.657432  150386 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:53:27.657466  150386 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:53:27.721218  150386 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:53:27.721247  150386 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:53:27.829857  150386 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:53:27.829877  150386 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0916 10:53:27.829978  150386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:53:27.829994  150386 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:53:27.901836  150386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:53:27.901863  150386 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:53:27.990782  150386 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:53:27.990806  150386 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:53:28.066565  150386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:53:28.066591  150386 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:53:28.286602  150386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:53:28.286635  150386 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:53:28.531261  150386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:53:28.531288  150386 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:53:28.532046  150386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:53:28.532067  150386 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:53:28.536520  150386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:53:28.536616  150386 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:53:28.539053  150386 out.go:235]   - Booting up control plane ...
	I0916 10:53:28.539193  150386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:53:28.539243  150386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:53:28.539365  150386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:53:28.539381  150386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:53:28.539976  150386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:53:28.539996  150386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:53:28.552676  150386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:53:28.552704  150386 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:53:28.558424  150386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:53:28.558455  150386 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:53:28.558497  150386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:53:28.558505  150386 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 10:53:28.640263  150386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:53:28.640300  150386 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:53:28.640435  150386 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:53:28.640447  150386 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:53:29.141777  150386 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.648489ms
	I0916 10:53:29.141809  150386 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.648489ms
	I0916 10:53:29.141898  150386 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:53:29.141922  150386 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:53:33.643743  150386 kubeadm.go:310] [api-check] The API server is healthy after 4.501974554s
	I0916 10:53:33.643773  150386 command_runner.go:130] > [api-check] The API server is healthy after 4.501974554s
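Both waits above poll an HTTP health endpoint in a loop: the kubelet's healthz on 127.0.0.1:10248 answers after roughly 500ms, and the API server's after roughly 4.5s, each under the 4m0s ceiling kubeadm states. A hedged Go sketch of such a polling loop (the waitHealthy helper is our name, not kubeadm's; the kubelet URL and timeout are taken from the log):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or the timeout elapses,
// mirroring the kubelet-check / api-check waits in the log above.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}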
	I0916 10:53:33.655458  150386 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:53:33.655490  150386 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:53:33.666692  150386 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:53:33.666702  150386 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:53:33.685168  150386 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:53:33.685197  150386 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:53:33.685391  150386 kubeadm.go:310] [mark-control-plane] Marking the node multinode-026168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:53:33.685401  150386 command_runner.go:130] > [mark-control-plane] Marking the node multinode-026168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:53:33.694893  150386 kubeadm.go:310] [bootstrap-token] Using token: t01fub.r49yz7owz29vmht5
	I0916 10:53:33.694919  150386 command_runner.go:130] > [bootstrap-token] Using token: t01fub.r49yz7owz29vmht5
	I0916 10:53:33.696702  150386 out.go:235]   - Configuring RBAC rules ...
	I0916 10:53:33.696831  150386 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:53:33.696848  150386 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:53:33.699750  150386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:53:33.699774  150386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:53:33.705469  150386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:53:33.705490  150386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:53:33.707965  150386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:53:33.707976  150386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:53:33.710360  150386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:53:33.710376  150386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:53:33.713861  150386 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:53:33.713878  150386 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:53:34.049692  150386 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:53:34.049712  150386 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:53:34.471244  150386 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:53:34.471270  150386 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0916 10:53:35.050638  150386 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:53:35.050661  150386 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0916 10:53:35.051407  150386 kubeadm.go:310] 
	I0916 10:53:35.051508  150386 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:53:35.051524  150386 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0916 10:53:35.051532  150386 kubeadm.go:310] 
	I0916 10:53:35.051671  150386 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:53:35.051683  150386 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0916 10:53:35.051689  150386 kubeadm.go:310] 
	I0916 10:53:35.051725  150386 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:53:35.051737  150386 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0916 10:53:35.051823  150386 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:53:35.051844  150386 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:53:35.051950  150386 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:53:35.051963  150386 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:53:35.051974  150386 kubeadm.go:310] 
	I0916 10:53:35.052068  150386 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:53:35.052083  150386 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0916 10:53:35.052090  150386 kubeadm.go:310] 
	I0916 10:53:35.052155  150386 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:53:35.052167  150386 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:53:35.052172  150386 kubeadm.go:310] 
	I0916 10:53:35.052241  150386 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:53:35.052252  150386 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0916 10:53:35.052358  150386 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:53:35.052368  150386 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:53:35.052472  150386 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:53:35.052477  150386 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:53:35.052487  150386 kubeadm.go:310] 
	I0916 10:53:35.052580  150386 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:53:35.052588  150386 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:53:35.052651  150386 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:53:35.052658  150386 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0916 10:53:35.052662  150386 kubeadm.go:310] 
	I0916 10:53:35.052761  150386 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t01fub.r49yz7owz29vmht5 \
	I0916 10:53:35.052769  150386 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token t01fub.r49yz7owz29vmht5 \
	I0916 10:53:35.052863  150386 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:53:35.052871  150386 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 10:53:35.052888  150386 kubeadm.go:310] 	--control-plane 
	I0916 10:53:35.052892  150386 command_runner.go:130] > 	--control-plane 
	I0916 10:53:35.052899  150386 kubeadm.go:310] 
	I0916 10:53:35.053025  150386 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:53:35.053049  150386 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:53:35.053056  150386 kubeadm.go:310] 
	I0916 10:53:35.053177  150386 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t01fub.r49yz7owz29vmht5 \
	I0916 10:53:35.053198  150386 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token t01fub.r49yz7owz29vmht5 \
	I0916 10:53:35.053370  150386 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:53:35.053384  150386 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 10:53:35.056111  150386 kubeadm.go:310] W0916 10:53:26.481933    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:53:35.056135  150386 command_runner.go:130] ! W0916 10:53:26.481933    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:53:35.056467  150386 kubeadm.go:310] W0916 10:53:26.482537    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:53:35.056481  150386 command_runner.go:130] ! W0916 10:53:26.482537    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:53:35.056823  150386 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:53:35.056851  150386 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:53:35.056961  150386 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:53:35.056988  150386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:53:35.057009  150386 cni.go:84] Creating CNI manager for ""
	I0916 10:53:35.057019  150386 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:53:35.060028  150386 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:53:35.061344  150386 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:53:35.065587  150386 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0916 10:53:35.065616  150386 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0916 10:53:35.065627  150386 command_runner.go:130] > Device: 37h/55d	Inode: 544182      Links: 1
	I0916 10:53:35.065638  150386 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:53:35.065653  150386 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0916 10:53:35.065663  150386 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0916 10:53:35.065676  150386 command_runner.go:130] > Change: 2024-09-16 10:23:14.433787463 +0000
	I0916 10:53:35.065688  150386 command_runner.go:130] >  Birth: 2024-09-16 10:23:14.405785404 +0000
	I0916 10:53:35.065743  150386 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:53:35.065754  150386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:53:35.083563  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:53:35.259319  150386 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0916 10:53:35.264698  150386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0916 10:53:35.272503  150386 command_runner.go:130] > serviceaccount/kindnet created
	I0916 10:53:35.280912  150386 command_runner.go:130] > daemonset.apps/kindnet created
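The CNI step above copies the generated kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the version-pinned kubectl, producing the four "created" lines. A minimal sketch of that apply invocation in Go (paths are from the log; in the real run the command goes through ssh_runner rather than executing locally):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Apply the CNI manifest with the version-pinned kubectl, as the
	// ssh_runner invocation in the log does (here run locally).
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}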
	I0916 10:53:35.284864  150386 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:53:35.284950  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:35.284980  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-026168 minikube.k8s.io/updated_at=2024_09_16T10_53_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-026168 minikube.k8s.io/primary=true
	I0916 10:53:35.291922  150386 command_runner.go:130] > -16
	I0916 10:53:35.291986  150386 ops.go:34] apiserver oom_adj: -16
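The oom_adj probe above reads /proc/<apiserver pid>/oom_adj; the -16 result means the kernel OOM killer will strongly prefer other processes over the API server. A rough local equivalent (pgrep and the proc path match the log's shell pipeline; pgrep may print several PIDs, so the first is taken):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("pgrep failed:", err)
		return
	}
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		fmt.Println("kube-apiserver not running")
		return
	}
	// Read the kernel's OOM adjustment for that process, as the log does.
	adj, err := os.ReadFile("/proc/" + fields[0] + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}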
	I0916 10:53:35.362511  150386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0916 10:53:35.362592  150386 command_runner.go:130] > node/multinode-026168 labeled
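The label step stamps the control-plane node with minikube.k8s.io/* metadata via "kubectl label --overwrite". With client-go, the same effect is a merge patch on the node's labels; a sketch, assuming the kubeconfig path from the log and showing only two of the labels:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Merge-patch a subset of the labels kubectl applies in the log.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true"}}}`)
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-026168",
		types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("node/multinode-026168 labeled")
}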
	I0916 10:53:35.362632  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:35.594017  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:35.863489  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:35.929347  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:36.363344  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:36.429937  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:36.863599  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:36.924251  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:37.363434  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:37.428045  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:37.863745  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:37.932230  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:38.362825  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:38.425127  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:38.863525  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:38.925423  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:39.362768  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:39.424290  150386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:53:39.863515  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:53:39.997203  150386 command_runner.go:130] > NAME      SECRETS   AGE
	I0916 10:53:39.997228  150386 command_runner.go:130] > default   0         0s
	I0916 10:53:40.000074  150386 kubeadm.go:1113] duration metric: took 4.715184212s to wait for elevateKubeSystemPrivileges
	I0916 10:53:40.000117  150386 kubeadm.go:394] duration metric: took 13.679975724s to StartCluster
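The burst of "kubectl get sa default" calls above is a retry loop: minikube polls until the controller manager has created the "default" ServiceAccount, which here appears after about ten attempts (roughly 4.7s). An equivalent poll with client-go's wait helpers (a sketch; the interval and timeout below are our choices, not minikube's):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms until the "default" ServiceAccount exists, as the
	// repeated "kubectl get sa default" calls in the log do.
	err = wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond,
		2*time.Minute, true, func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}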
	I0916 10:53:40.000141  150386 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:40.000222  150386 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:40.000897  150386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:40.001115  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:53:40.001134  150386 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:53:40.001191  150386 addons.go:69] Setting storage-provisioner=true in profile "multinode-026168"
	I0916 10:53:40.001113  150386 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:53:40.001210  150386 addons.go:234] Setting addon storage-provisioner=true in "multinode-026168"
	I0916 10:53:40.001230  150386 addons.go:69] Setting default-storageclass=true in profile "multinode-026168"
	I0916 10:53:40.001310  150386 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-026168"
	I0916 10:53:40.001359  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:53:40.001241  150386 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:53:40.001708  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:40.001829  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:40.004500  150386 out.go:177] * Verifying Kubernetes components...
	I0916 10:53:40.006313  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:53:40.024797  150386 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:53:40.026334  150386 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:53:40.026353  150386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:53:40.026414  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:40.031105  150386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:40.031422  150386 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:53:40.032571  150386 addons.go:234] Setting addon default-storageclass=true in "multinode-026168"
	I0916 10:53:40.032605  150386 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:53:40.032970  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:53:40.033254  150386 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:53:40.044917  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:40.062746  150386 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:53:40.062768  150386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:53:40.062836  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:53:40.079883  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:53:40.122609  150386 command_runner.go:130] > apiVersion: v1
	I0916 10:53:40.122632  150386 command_runner.go:130] > data:
	I0916 10:53:40.122639  150386 command_runner.go:130] >   Corefile: |
	I0916 10:53:40.122645  150386 command_runner.go:130] >     .:53 {
	I0916 10:53:40.122652  150386 command_runner.go:130] >         errors
	I0916 10:53:40.122660  150386 command_runner.go:130] >         health {
	I0916 10:53:40.122668  150386 command_runner.go:130] >            lameduck 5s
	I0916 10:53:40.122675  150386 command_runner.go:130] >         }
	I0916 10:53:40.122681  150386 command_runner.go:130] >         ready
	I0916 10:53:40.122690  150386 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0916 10:53:40.122703  150386 command_runner.go:130] >            pods insecure
	I0916 10:53:40.122711  150386 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0916 10:53:40.122723  150386 command_runner.go:130] >            ttl 30
	I0916 10:53:40.122732  150386 command_runner.go:130] >         }
	I0916 10:53:40.122738  150386 command_runner.go:130] >         prometheus :9153
	I0916 10:53:40.122749  150386 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0916 10:53:40.122757  150386 command_runner.go:130] >            max_concurrent 1000
	I0916 10:53:40.122767  150386 command_runner.go:130] >         }
	I0916 10:53:40.122773  150386 command_runner.go:130] >         cache 30
	I0916 10:53:40.122780  150386 command_runner.go:130] >         loop
	I0916 10:53:40.122789  150386 command_runner.go:130] >         reload
	I0916 10:53:40.122796  150386 command_runner.go:130] >         loadbalance
	I0916 10:53:40.122810  150386 command_runner.go:130] >     }
	I0916 10:53:40.122819  150386 command_runner.go:130] > kind: ConfigMap
	I0916 10:53:40.122825  150386 command_runner.go:130] > metadata:
	I0916 10:53:40.122838  150386 command_runner.go:130] >   creationTimestamp: "2024-09-16T10:53:34Z"
	I0916 10:53:40.122847  150386 command_runner.go:130] >   name: coredns
	I0916 10:53:40.122855  150386 command_runner.go:130] >   namespace: kube-system
	I0916 10:53:40.122864  150386 command_runner.go:130] >   resourceVersion: "231"
	I0916 10:53:40.122872  150386 command_runner.go:130] >   uid: e998cc8c-5131-4a5d-a9a1-432e2b6af9db
	I0916 10:53:40.125952  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:53:40.210364  150386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:53:40.216115  150386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:53:40.315121  150386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:53:40.608310  150386 command_runner.go:130] > configmap/coredns replaced
	I0916 10:53:40.614257  150386 start.go:971] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
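The sed pipeline above rewrites the coredns ConfigMap in place, inserting a hosts stanza that resolves host.minikube.internal to the gateway IP just before the forward plugin. The same edit expressed with client-go (an illustrative sketch, not minikube's code; the string surgery is deliberately simple and assumes the Corefile key exists):

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.TODO()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts block resolving host.minikube.internal just before the
	// forward plugin, matching the sed edit in the log.
	hosts := "        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward .", hosts+"        forward .", 1)

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("configmap/coredns replaced")
}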
	I0916 10:53:40.614797  150386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:40.615106  150386 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:53:40.615426  150386 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:53:40.615439  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.615447  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.615451  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.615950  150386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:53:40.616190  150386 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:53:40.616456  150386 node_ready.go:35] waiting up to 6m0s for node "multinode-026168" to be "Ready" ...
	I0916 10:53:40.616538  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:40.616546  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.616553  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.616558  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.626184  150386 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 10:53:40.626207  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.626216  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.626221  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.626227  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.626230  150386 round_trippers.go:580]     Audit-Id: f9a41f42-7443-4f80-a0c1-43f4f109f6c3
	I0916 10:53:40.626226  150386 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:53:40.626254  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.626270  150386 round_trippers.go:580]     Audit-Id: 0548045c-00aa-4805-9049-9c5199b72073
	I0916 10:53:40.626275  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.626282  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.626293  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.626299  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.626308  150386 round_trippers.go:580]     Content-Length: 291
	I0916 10:53:40.626314  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.626235  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.626342  150386 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"214e801a-0760-43e2-9590-87dc9876a663","resourceVersion":"340","creationTimestamp":"2024-09-16T10:53:34Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:53:40.626349  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.626520  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:40.626896  150386 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"214e801a-0760-43e2-9590-87dc9876a663","resourceVersion":"340","creationTimestamp":"2024-09-16T10:53:34Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:53:40.626962  150386 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:53:40.626978  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.626988  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.626995  150386 round_trippers.go:473]     Content-Type: application/json
	I0916 10:53:40.627006  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.632007  150386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:53:40.632024  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.632032  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.632035  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.632038  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.632041  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.632045  150386 round_trippers.go:580]     Content-Length: 291
	I0916 10:53:40.632047  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.632050  150386 round_trippers.go:580]     Audit-Id: 44165c95-2095-4714-b953-3c36a7e400d6
	I0916 10:53:40.632066  150386 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"214e801a-0760-43e2-9590-87dc9876a663","resourceVersion":"354","creationTimestamp":"2024-09-16T10:53:34Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
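The GET/PUT pair on the Scale subresource above drops coredns from two replicas to one, since a single-node cluster needs only one DNS pod. client-go exposes the same round-trip as GetScale/UpdateScale; a minimal sketch (kubeconfig path assumed as elsewhere in the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.TODO()
	// Read the current Scale for the coredns deployment...
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ...and write it back with spec.replicas lowered to 1, as the PUT does.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}

Going through the Scale subresource rather than editing the Deployment spec directly avoids write conflicts with other clients that may be updating the Deployment object at the same time.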
	I0916 10:53:40.859067  150386 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0916 10:53:40.864917  150386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0916 10:53:40.871412  150386 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 10:53:40.877843  150386 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 10:53:40.885623  150386 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0916 10:53:40.893583  150386 command_runner.go:130] > pod/storage-provisioner created
	I0916 10:53:40.898202  150386 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0916 10:53:40.898297  150386 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:53:40.898322  150386 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:53:40.898404  150386 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:53:40.898414  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.898424  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.898429  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.902756  150386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:53:40.902779  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.902786  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.902791  150386 round_trippers.go:580]     Audit-Id: 32fa806b-d148-4927-b934-aba6392098c5
	I0916 10:53:40.902795  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.902798  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.902801  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.902806  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.902809  150386 round_trippers.go:580]     Content-Length: 1273
	I0916 10:53:40.902890  150386 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"373"},"items":[{"metadata":{"name":"standard","uid":"36c62ec6-ddea-48a1-9dc2-2da1904ffa1f","resourceVersion":"353","creationTimestamp":"2024-09-16T10:53:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:53:40.903237  150386 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"36c62ec6-ddea-48a1-9dc2-2da1904ffa1f","resourceVersion":"353","creationTimestamp":"2024-09-16T10:53:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:53:40.903283  150386 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:53:40.903292  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:40.903301  150386 round_trippers.go:473]     Content-Type: application/json
	I0916 10:53:40.903306  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:40.903308  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:40.906003  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:40.906026  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:40.906036  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:40.906041  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:40.906047  150386 round_trippers.go:580]     Content-Length: 1220
	I0916 10:53:40.906051  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:40 GMT
	I0916 10:53:40.906056  150386 round_trippers.go:580]     Audit-Id: 4fd6db17-21e5-4aec-8b6d-0ef0ff14fb81
	I0916 10:53:40.906062  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:40.906066  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:40.906097  150386 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"36c62ec6-ddea-48a1-9dc2-2da1904ffa1f","resourceVersion":"353","creationTimestamp":"2024-09-16T10:53:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:53:40.908589  150386 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:53:40.910350  150386 addons.go:510] duration metric: took 909.208755ms for enable addons: enabled=[storage-provisioner default-storageclass]
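
The PUT above is the default-storageclass addon marking minikube's "standard" StorageClass (provisioner k8s.io/minikube-hostpath) as the cluster default via the storageclass.kubernetes.io/is-default-class annotation. A minimal client-go sketch of the same read-modify-write, assuming a reachable cluster; the kubeconfig path below is a hypothetical placeholder, not a value from this run:

// defaultsc.go: mark an existing StorageClass as the cluster default,
// mirroring the PUT to /apis/storage.k8s.io/v1/storageclasses/standard above.
// Sketch only; the kubeconfig path is a hypothetical placeholder.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Fetch the current object, set the annotation, send it back (the PUT above).
	sc, err := clientset.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// The DefaultStorageClass admission plugin consults this annotation when a
	// PersistentVolumeClaim is created without an explicit storageClassName.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

	if _, err := clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard is now the default StorageClass")
}
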
	I0916 10:53:41.116452  150386 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:53:41.116477  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:41.116485  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:41.116489  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:41.116640  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:41.116672  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:41.116684  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:41.116691  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:41.118874  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:41.118908  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:41.118920  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:41.118928  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:41.118933  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:41.118938  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:41.118945  150386 round_trippers.go:580]     Content-Length: 291
	I0916 10:53:41.118951  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:41 GMT
	I0916 10:53:41.118956  150386 round_trippers.go:580]     Audit-Id: 14700eff-8e81-414a-96de-3277b23c7acc
	I0916 10:53:41.118956  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:41.119028  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:41.119040  150386 round_trippers.go:580]     Audit-Id: fd4e6ea9-e690-4e72-a149-c9b8ee79d7fd
	I0916 10:53:41.119045  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:41.119049  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:41.119052  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:41.119056  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:41.119061  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:41 GMT
	I0916 10:53:41.118992  150386 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"214e801a-0760-43e2-9590-87dc9876a663","resourceVersion":"365","creationTimestamp":"2024-09-16T10:53:34Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0916 10:53:41.119190  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:41.119243  150386 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-026168" context rescaled to 1 replicas
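
The kapi.go:214 line above summarizes a rescale through the Deployment's scale subresource: minikube GETs /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale and, when the replica count differs from the target, writes the same subresource back (here it was already 1, so only the GET appears). A hedged client-go equivalent of that pattern; the kubeconfig path is again a placeholder:

// rescale.go: pin the coredns Deployment to one replica via the scale
// subresource, as kapi.go:214 above describes. Sketch only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deployments := clientset.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale, as logged above.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		// Write the scale subresource back; only the replica count changes.
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns rescaled to 1 replica")
}
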
	I0916 10:53:41.617105  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:41.617134  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:41.617142  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:41.617147  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:41.619550  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:41.619576  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:41.619585  150386 round_trippers.go:580]     Audit-Id: eba217dc-cf5e-453c-8d97-7d7bebdba7f2
	I0916 10:53:41.619589  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:41.619594  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:41.619598  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:41.619603  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:41.619609  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:41 GMT
	I0916 10:53:41.619784  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:42.117504  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:42.117530  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:42.117540  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:42.117543  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:42.119752  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:42.119775  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:42.119784  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:42.119788  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:42.119793  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:42.119799  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:42 GMT
	I0916 10:53:42.119803  150386 round_trippers.go:580]     Audit-Id: 1b6bc607-25a8-4eb2-94c2-669ae72227f6
	I0916 10:53:42.119807  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:42.119919  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:42.617283  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:42.617309  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:42.617318  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:42.617323  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:42.619627  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:42.619650  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:42.619657  150386 round_trippers.go:580]     Audit-Id: 1a21f591-014b-4e8a-a374-83658b7ace7a
	I0916 10:53:42.619665  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:42.619669  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:42.619672  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:42.619676  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:42.619680  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:42 GMT
	I0916 10:53:42.619783  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:42.620092  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
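
From this point the log is node_ready.go's wait loop: a GET of /api/v1/nodes/multinode-026168 roughly every 500ms until the node's Ready condition reports True. A simplified sketch of that polling pattern follows; the kubeconfig path, interval, and timeout are illustrative assumptions rather than minikube's exact values:

// nodeready.go: poll a node until its Ready condition is True, the pattern
// behind the repeated GET /api/v1/nodes/multinode-026168 requests above.
// Sketch only; path, interval, and timeout are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		node, err := clientset.CoreV1().Nodes().Get(ctx, "multinode-026168", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}
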
	I0916 10:53:43.117401  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:43.117426  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:43.117437  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:43.117443  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:43.119565  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:43.119588  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:43.119597  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:43.119604  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:43.119608  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:43.119612  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:43 GMT
	I0916 10:53:43.119619  150386 round_trippers.go:580]     Audit-Id: e4cfefcf-91cd-441a-833b-d12723eb585e
	I0916 10:53:43.119623  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:43.119734  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:43.616944  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:43.616969  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:43.616976  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:43.616980  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:43.619154  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:43.619180  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:43.619190  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:43.619194  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:43.619198  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:43.619201  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:43.619203  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:43 GMT
	I0916 10:53:43.619206  150386 round_trippers.go:580]     Audit-Id: cc3120c2-4368-4109-b162-4462ae59da8e
	I0916 10:53:43.619409  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:44.117059  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:44.117088  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:44.117097  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:44.117100  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:44.119343  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:44.119369  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:44.119376  150386 round_trippers.go:580]     Audit-Id: 8c110a90-6a12-4fec-8811-94c426b77d70
	I0916 10:53:44.119379  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:44.119383  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:44.119386  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:44.119389  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:44.119394  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:44 GMT
	I0916 10:53:44.119508  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:44.616663  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:44.616688  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:44.616696  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:44.616701  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:44.618973  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:44.618995  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:44.619001  150386 round_trippers.go:580]     Audit-Id: 770b78cf-805b-43fb-8530-7c33082ba3bb
	I0916 10:53:44.619005  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:44.619008  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:44.619011  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:44.619014  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:44.619016  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:44 GMT
	I0916 10:53:44.619117  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:45.116767  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:45.116792  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:45.116800  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:45.116805  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:45.119314  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:45.119342  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:45.119350  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:45.119355  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:45 GMT
	I0916 10:53:45.119358  150386 round_trippers.go:580]     Audit-Id: 0ee4bef0-a16c-4708-a7a4-dfadfc5ccb46
	I0916 10:53:45.119361  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:45.119363  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:45.119369  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:45.119484  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:45.119999  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:45.617022  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:45.617046  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:45.617055  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:45.617059  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:45.619402  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:45.619422  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:45.619429  150386 round_trippers.go:580]     Audit-Id: cdced059-83aa-47fc-8f6f-45fe88339dea
	I0916 10:53:45.619432  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:45.619436  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:45.619441  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:45.619446  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:45.619450  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:45 GMT
	I0916 10:53:45.619591  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:46.117228  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:46.117251  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:46.117259  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:46.117262  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:46.119638  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:46.119659  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:46.119669  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:46.119674  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:46.119680  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:46.119684  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:46.119689  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:46 GMT
	I0916 10:53:46.119694  150386 round_trippers.go:580]     Audit-Id: f19037a1-764f-4e76-b3ec-4d94d9087b98
	I0916 10:53:46.119830  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:46.617095  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:46.617118  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:46.617126  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:46.617130  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:46.619352  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:46.619371  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:46.619378  150386 round_trippers.go:580]     Audit-Id: 12a0932c-dd20-4f12-8a40-8500af01b0aa
	I0916 10:53:46.619382  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:46.619384  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:46.619387  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:46.619390  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:46.619393  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:46 GMT
	I0916 10:53:46.619548  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:47.117142  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:47.117169  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:47.117177  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:47.117182  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:47.119467  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:47.119492  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:47.119502  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:47.119508  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:47.119513  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:47 GMT
	I0916 10:53:47.119518  150386 round_trippers.go:580]     Audit-Id: 4529e82f-203e-4e00-857e-1e2d1684de05
	I0916 10:53:47.119522  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:47.119526  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:47.119679  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:47.617369  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:47.617397  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:47.617405  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:47.617409  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:47.619634  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:47.619657  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:47.619666  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:47.619671  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:47 GMT
	I0916 10:53:47.619680  150386 round_trippers.go:580]     Audit-Id: 12feab5b-afae-4eb3-ad25-2a966d6200dc
	I0916 10:53:47.619685  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:47.619693  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:47.619696  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:47.619809  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:47.620109  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:48.117541  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:48.117570  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:48.117578  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:48.117583  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:48.119926  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:48.119951  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:48.119957  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:48.119963  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:48.119969  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:48.119973  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:48.119981  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:48 GMT
	I0916 10:53:48.119984  150386 round_trippers.go:580]     Audit-Id: 12def08e-855e-4272-8e5f-682d77355528
	I0916 10:53:48.120102  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:48.616746  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:48.616771  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:48.616778  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:48.616782  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:48.619202  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:48.619225  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:48.619234  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:48.619242  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:48.619247  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:48.619251  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:48.619254  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:48 GMT
	I0916 10:53:48.619258  150386 round_trippers.go:580]     Audit-Id: 517665d8-72ec-4dd0-926c-36104c9d5963
	I0916 10:53:48.619401  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:49.116961  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:49.116996  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:49.117004  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:49.117007  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:49.119412  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:49.119441  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:49.119451  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:49 GMT
	I0916 10:53:49.119456  150386 round_trippers.go:580]     Audit-Id: 9ae35bce-2097-4129-a372-362680001968
	I0916 10:53:49.119460  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:49.119469  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:49.119472  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:49.119478  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:49.119670  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:49.617424  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:49.617456  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:49.617468  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:49.617472  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:49.619722  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:49.619740  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:49.619746  150386 round_trippers.go:580]     Audit-Id: bfe53f2a-cab2-4d3c-834a-90af3ebd269d
	I0916 10:53:49.619751  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:49.619753  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:49.619756  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:49.619762  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:49.619766  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:49 GMT
	I0916 10:53:49.619927  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:49.620260  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:50.116728  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:50.116756  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:50.116764  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:50.116768  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:50.119178  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:50.119208  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:50.119218  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:50.119226  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:50.119233  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:50.119239  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:50 GMT
	I0916 10:53:50.119244  150386 round_trippers.go:580]     Audit-Id: d39ddcf8-996b-4e75-a653-a984a88a4d95
	I0916 10:53:50.119249  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:50.119352  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:50.616946  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:50.616971  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:50.616979  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:50.616984  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:50.619019  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:50.619037  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:50.619043  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:50.619047  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:50 GMT
	I0916 10:53:50.619049  150386 round_trippers.go:580]     Audit-Id: 885d5982-7134-4f63-9e57-3353514e2aa0
	I0916 10:53:50.619052  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:50.619054  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:50.619057  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:50.619248  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:51.117517  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:51.117548  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:51.117559  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:51.117565  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:51.119940  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:51.119960  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:51.119967  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:51.119970  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:51 GMT
	I0916 10:53:51.119973  150386 round_trippers.go:580]     Audit-Id: 590ad475-ebeb-4883-886f-00302fe65d3d
	I0916 10:53:51.119976  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:51.119979  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:51.119981  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:51.120171  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:51.616972  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:51.616999  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:51.617008  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:51.617013  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:51.619550  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:51.619571  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:51.619577  150386 round_trippers.go:580]     Audit-Id: 03cf0642-860b-42c3-b1d2-7006ea64714a
	I0916 10:53:51.619580  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:51.619584  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:51.619588  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:51.619593  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:51.619596  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:51 GMT
	I0916 10:53:51.619837  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:52.117501  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:52.117525  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:52.117533  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:52.117537  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:52.119864  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:52.119894  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:52.119904  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:52.119910  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:52.119915  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:52 GMT
	I0916 10:53:52.119920  150386 round_trippers.go:580]     Audit-Id: 938bb85d-2831-446e-a44d-bcbbcee136b0
	I0916 10:53:52.119923  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:52.119927  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:52.120088  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:52.120477  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:52.616688  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:52.616709  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:52.616716  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:52.616721  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:52.618998  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:52.619018  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:52.619025  150386 round_trippers.go:580]     Audit-Id: 23435aab-675e-477e-8d42-3a068f46a079
	I0916 10:53:52.619030  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:52.619033  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:52.619036  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:52.619038  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:52.619041  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:52 GMT
	I0916 10:53:52.619183  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:53.116723  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:53.116750  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:53.116758  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:53.116764  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:53.119088  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:53.119121  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:53.119132  150386 round_trippers.go:580]     Audit-Id: 63da5e7f-f2e8-4bb9-8a07-5fe007ff0a5b
	I0916 10:53:53.119139  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:53.119146  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:53.119157  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:53.119165  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:53.119171  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:53 GMT
	I0916 10:53:53.119306  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:53.616823  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:53.616849  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:53.616856  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:53.616859  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:53.619150  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:53.619169  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:53.619176  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:53.619179  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:53.619183  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:53 GMT
	I0916 10:53:53.619187  150386 round_trippers.go:580]     Audit-Id: e386e022-3a4a-4bbb-8b4c-43c8483579e9
	I0916 10:53:53.619190  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:53.619195  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:53.619321  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:54.116895  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:54.116921  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:54.116930  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:54.116935  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:54.119182  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:54.119201  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:54.119208  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:54.119211  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:54.119216  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:54.119219  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:54.119221  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:54 GMT
	I0916 10:53:54.119224  150386 round_trippers.go:580]     Audit-Id: 5aa37b20-b37d-46b8-8b98-82b05a31ce2e
	I0916 10:53:54.119388  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:54.617067  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:54.617101  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:54.617113  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:54.617118  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:54.619234  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:54.619254  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:54.619260  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:54.619264  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:54.619267  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:54 GMT
	I0916 10:53:54.619270  150386 round_trippers.go:580]     Audit-Id: 1c19a77a-39bd-4b3a-baaa-5ac3dd293d29
	I0916 10:53:54.619272  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:54.619275  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:54.619457  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:54.619843  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:55.117081  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:55.117106  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:55.117115  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:55.117119  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:55.119379  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:55.119404  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:55.119413  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:55.119419  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:55.119424  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:55 GMT
	I0916 10:53:55.119430  150386 round_trippers.go:580]     Audit-Id: d2ee37de-b302-4acc-8337-5b6537438e81
	I0916 10:53:55.119436  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:55.119442  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:55.119597  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:55.617047  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:55.617072  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:55.617090  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:55.617094  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:55.619435  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:55.619457  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:55.619465  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:55 GMT
	I0916 10:53:55.619470  150386 round_trippers.go:580]     Audit-Id: b95077a7-0a3f-4670-aff3-54d3926db2ae
	I0916 10:53:55.619474  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:55.619477  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:55.619481  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:55.619485  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:55.619585  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:56.116747  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:56.116773  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:56.116780  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:56.116784  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:56.119036  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:56.119057  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:56.119064  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:56.119069  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:56.119073  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:56 GMT
	I0916 10:53:56.119079  150386 round_trippers.go:580]     Audit-Id: b8184018-3755-40d8-b48a-5cc359d5313b
	I0916 10:53:56.119084  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:56.119087  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:56.119187  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:56.617245  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:56.617270  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:56.617278  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:56.617283  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:56.619756  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:56.619780  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:56.619788  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:56.619792  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:56.619796  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:56 GMT
	I0916 10:53:56.619801  150386 round_trippers.go:580]     Audit-Id: 23e8f8ed-1381-4a83-b8cc-121d8428adc8
	I0916 10:53:56.619806  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:56.619809  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:56.619984  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:56.620353  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:57.117692  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:57.117715  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:57.117724  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:57.117728  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:57.120019  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:57.120043  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:57.120052  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:57.120058  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:57.120063  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:57 GMT
	I0916 10:53:57.120067  150386 round_trippers.go:580]     Audit-Id: 8cf78437-1de9-4a85-9b9f-30670f0a7dc5
	I0916 10:53:57.120071  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:57.120074  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:57.120352  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:57.616926  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:57.616960  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:57.616970  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:57.616976  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:57.619372  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:57.619396  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:57.619404  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:57.619409  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:57.619413  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:57 GMT
	I0916 10:53:57.619417  150386 round_trippers.go:580]     Audit-Id: 1f1e36b3-1c7c-4544-8a9c-eb512aa82b6c
	I0916 10:53:57.619421  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:57.619426  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:57.619562  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:58.117247  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:58.117279  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:58.117290  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:58.117294  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:58.119568  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:58.119593  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:58.119603  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:58.119609  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:58 GMT
	I0916 10:53:58.119614  150386 round_trippers.go:580]     Audit-Id: 4fbcc7f0-7d08-411b-b127-0b0b663a6729
	I0916 10:53:58.119620  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:58.119624  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:58.119630  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:58.119788  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:58.617485  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:58.617510  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:58.617518  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:58.617523  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:58.619577  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:58.619600  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:58.619615  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:58 GMT
	I0916 10:53:58.619620  150386 round_trippers.go:580]     Audit-Id: 8244889b-a63e-4c50-b675-1ad681e4d690
	I0916 10:53:58.619624  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:58.619629  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:58.619634  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:58.619638  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:58.619803  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:59.117490  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:59.117514  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:59.117522  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:59.117525  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:59.119739  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:59.119760  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:59.119774  150386 round_trippers.go:580]     Audit-Id: 8291e611-cc2b-4443-a010-cb47dcfe3392
	I0916 10:53:59.119781  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:59.119786  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:59.119792  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:59.119797  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:59.119804  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:59 GMT
	I0916 10:53:59.119931  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:53:59.120229  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:53:59.617693  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:53:59.617714  150386 round_trippers.go:469] Request Headers:
	I0916 10:53:59.617722  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:53:59.617725  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:53:59.619896  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:53:59.619914  150386 round_trippers.go:577] Response Headers:
	I0916 10:53:59.619920  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:53:59.619924  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:53:59 GMT
	I0916 10:53:59.619927  150386 round_trippers.go:580]     Audit-Id: dfdf1042-375b-4e8a-bb7c-a2fa683ba77c
	I0916 10:53:59.619930  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:53:59.619932  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:53:59.619938  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:53:59.620080  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:00.116886  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:00.116916  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:00.116923  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:00.116932  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:00.119220  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:00.119238  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:00.119244  150386 round_trippers.go:580]     Audit-Id: 392b271a-95ed-4b56-ba55-057c71956cd4
	I0916 10:54:00.119248  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:00.119253  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:00.119257  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:00.119260  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:00.119264  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:00 GMT
	I0916 10:54:00.119387  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:00.617035  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:00.617060  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:00.617068  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:00.617072  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:00.619477  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:00.619507  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:00.619515  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:00.619520  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:00 GMT
	I0916 10:54:00.619525  150386 round_trippers.go:580]     Audit-Id: 9e2085c0-1961-4471-882c-48f50115b637
	I0916 10:54:00.619529  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:00.619535  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:00.619538  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:00.619745  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:01.117322  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:01.117358  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:01.117373  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:01.117379  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:01.119427  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:01.119450  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:01.119459  150386 round_trippers.go:580]     Audit-Id: d7282b5e-c8f0-476d-86bc-4d9ba3a6b0cc
	I0916 10:54:01.119463  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:01.119469  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:01.119474  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:01.119480  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:01.119485  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:01 GMT
	I0916 10:54:01.119610  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:01.617460  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:01.617491  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:01.617503  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:01.617509  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:01.620032  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:01.620061  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:01.620069  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:01.620076  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:01.620081  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:01.620085  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:01 GMT
	I0916 10:54:01.620090  150386 round_trippers.go:580]     Audit-Id: 7a6cbb8a-2690-4d9e-92ea-c5e9a72def47
	I0916 10:54:01.620094  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:01.620257  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:01.620558  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:02.116862  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:02.116889  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:02.116896  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:02.116903  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:02.119134  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:02.119153  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:02.119160  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:02.119167  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:02.119172  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:02.119176  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:02.119179  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:02 GMT
	I0916 10:54:02.119183  150386 round_trippers.go:580]     Audit-Id: 9b822245-ec73-4fa0-b7af-40f5bc4b2882
	I0916 10:54:02.119309  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:02.616880  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:02.616910  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:02.616919  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:02.616923  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:02.619117  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:02.619143  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:02.619153  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:02 GMT
	I0916 10:54:02.619158  150386 round_trippers.go:580]     Audit-Id: 9f94f6b8-d35a-467d-987c-81b16e427b7f
	I0916 10:54:02.619164  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:02.619171  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:02.619175  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:02.619180  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:02.619330  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:03.116891  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:03.116916  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:03.116923  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:03.116928  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:03.119204  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:03.119225  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:03.119231  150386 round_trippers.go:580]     Audit-Id: bae5e557-fc03-45d9-8a9d-d7867d46a500
	I0916 10:54:03.119239  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:03.119242  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:03.119244  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:03.119247  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:03.119249  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:03 GMT
	I0916 10:54:03.119351  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:03.616993  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:03.617025  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:03.617037  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:03.617043  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:03.619304  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:03.619327  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:03.619335  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:03.619339  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:03 GMT
	I0916 10:54:03.619342  150386 round_trippers.go:580]     Audit-Id: c0d34464-36ee-4376-a44c-8ee9c00b9017
	I0916 10:54:03.619345  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:03.619349  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:03.619351  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:03.619525  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:04.117212  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:04.117236  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:04.117244  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:04.117249  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:04.119600  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:04.119622  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:04.119633  150386 round_trippers.go:580]     Audit-Id: 45f293fa-f5b6-47e8-8490-79743ff5bc1a
	I0916 10:54:04.119636  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:04.119639  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:04.119641  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:04.119644  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:04.119646  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:04 GMT
	I0916 10:54:04.119837  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:04.120173  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:04.617468  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:04.617498  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:04.617506  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:04.617511  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:04.619521  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:04.619541  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:04.619548  150386 round_trippers.go:580]     Audit-Id: 2be5c4a2-e870-4bd5-abe3-83dd951a3b03
	I0916 10:54:04.619552  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:04.619557  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:04.619561  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:04.619564  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:04.619568  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:04 GMT
	I0916 10:54:04.619743  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:05.117458  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:05.117484  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:05.117492  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:05.117499  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:05.119659  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:05.119679  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:05.119686  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:05.119691  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:05.119695  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:05.119700  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:05 GMT
	I0916 10:54:05.119704  150386 round_trippers.go:580]     Audit-Id: 5e9214a7-0771-44ed-93a6-e554e9ddd410
	I0916 10:54:05.119708  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:05.119863  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:05.617548  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:05.617570  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:05.617577  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:05.617583  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:05.619759  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:05.619779  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:05.619788  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:05.619793  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:05.619796  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:05 GMT
	I0916 10:54:05.619799  150386 round_trippers.go:580]     Audit-Id: 8fe4f9c6-fa42-490c-9251-a4a9920d93b4
	I0916 10:54:05.619802  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:05.619805  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:05.619942  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:06.117627  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:06.117649  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:06.117658  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:06.117662  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:06.120388  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:06.120475  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:06.120497  150386 round_trippers.go:580]     Audit-Id: 9f49b3da-fef4-44b7-821c-1883547fa9a4
	I0916 10:54:06.120506  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:06.120527  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:06.120536  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:06.120540  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:06.120544  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:06 GMT
	I0916 10:54:06.120712  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:06.121171  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:06.617566  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:06.617590  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:06.617599  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:06.617604  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:06.619738  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:06.619757  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:06.619764  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:06.619767  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:06.619770  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:06.619774  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:06 GMT
	I0916 10:54:06.619776  150386 round_trippers.go:580]     Audit-Id: 268af30a-044b-4099-8c2d-81b72a2d5b84
	I0916 10:54:06.619779  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:06.619974  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:07.116654  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:07.116683  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:07.116692  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:07.116701  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:07.118906  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:07.118930  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:07.118940  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:07.118945  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:07.118951  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:07.118956  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:07.118961  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:07 GMT
	I0916 10:54:07.118965  150386 round_trippers.go:580]     Audit-Id: ae8d4514-f532-4b42-a139-617e17330272
	I0916 10:54:07.119105  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:07.616736  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:07.616762  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:07.616769  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:07.616774  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:07.619001  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:07.619022  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:07.619035  150386 round_trippers.go:580]     Audit-Id: 461d9fe1-f9b0-409f-b6b8-e0b29c479f23
	I0916 10:54:07.619040  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:07.619045  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:07.619048  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:07.619052  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:07.619057  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:07 GMT
	I0916 10:54:07.619217  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:08.116824  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:08.116850  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:08.116861  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:08.116868  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:08.119256  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:08.119285  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:08.119293  150386 round_trippers.go:580]     Audit-Id: 8d37ef0f-05ee-4f42-9fe1-80db8abf8df6
	I0916 10:54:08.119297  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:08.119300  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:08.119305  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:08.119308  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:08.119314  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:08 GMT
	I0916 10:54:08.119432  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:08.616964  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:08.617006  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:08.617016  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:08.617021  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:08.619206  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:08.619228  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:08.619237  150386 round_trippers.go:580]     Audit-Id: 338c18e5-bd4c-4adc-9e46-f52f0f9fe471
	I0916 10:54:08.619241  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:08.619246  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:08.619249  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:08.619253  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:08.619257  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:08 GMT
	I0916 10:54:08.619386  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:08.619714  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:09.116747  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:09.116771  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:09.116781  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:09.116787  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:09.119002  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:09.119022  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:09.119032  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:09.119037  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:09.119042  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:09.119047  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:09 GMT
	I0916 10:54:09.119051  150386 round_trippers.go:580]     Audit-Id: 944d5c8a-8d97-45f0-bf41-0cf7b23809c5
	I0916 10:54:09.119055  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:09.119173  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:09.616735  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:09.616761  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:09.616768  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:09.616772  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:09.619149  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:09.619175  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:09.619185  150386 round_trippers.go:580]     Audit-Id: 78ea15c6-485c-40cf-8958-1045974f90a8
	I0916 10:54:09.619189  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:09.619195  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:09.619198  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:09.619201  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:09.619204  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:09 GMT
	I0916 10:54:09.619327  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:10.117118  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:10.117140  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:10.117148  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:10.117152  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:10.119364  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:10.119392  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:10.119401  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:10.119407  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:10.119412  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:10 GMT
	I0916 10:54:10.119417  150386 round_trippers.go:580]     Audit-Id: dacb2d06-957b-4648-a200-d9676d52fc79
	I0916 10:54:10.119421  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:10.119424  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:10.119573  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:10.617257  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:10.617282  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:10.617290  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:10.617293  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:10.619517  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:10.619544  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:10.619553  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:10 GMT
	I0916 10:54:10.619558  150386 round_trippers.go:580]     Audit-Id: 78644a3f-da34-4061-8e55-763b6523fb12
	I0916 10:54:10.619591  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:10.619596  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:10.619601  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:10.619608  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:10.619785  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:10.620129  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:11.117544  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:11.117568  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:11.117598  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:11.117602  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:11.119835  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:11.119860  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:11.119868  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:11.119874  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:11 GMT
	I0916 10:54:11.119878  150386 round_trippers.go:580]     Audit-Id: a2e73f77-df1f-4e91-a413-7145e3790143
	I0916 10:54:11.119881  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:11.119886  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:11.119890  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:11.120068  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:11.616910  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:11.616932  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:11.616940  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:11.616944  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:11.619107  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:11.619129  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:11.619134  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:11.619139  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:11.619142  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:11 GMT
	I0916 10:54:11.619146  150386 round_trippers.go:580]     Audit-Id: d65d6da5-b57a-4909-8e30-5651b7705c5a
	I0916 10:54:11.619149  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:11.619155  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:11.619340  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:12.117004  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:12.117032  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:12.117040  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:12.117045  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:12.119461  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:12.119488  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:12.119499  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:12.119507  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:12 GMT
	I0916 10:54:12.119521  150386 round_trippers.go:580]     Audit-Id: f4b344ea-5d12-4855-bba7-702aeaddfd9c
	I0916 10:54:12.119527  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:12.119531  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:12.119535  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:12.119666  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:12.617119  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:12.617148  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:12.617158  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:12.617164  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:12.619328  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:12.619355  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:12.619363  150386 round_trippers.go:580]     Audit-Id: b3eb2aa8-bb22-4047-9839-d00e1f1ba713
	I0916 10:54:12.619367  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:12.619371  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:12.619375  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:12.619378  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:12.619384  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:12 GMT
	I0916 10:54:12.619560  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:13.117154  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:13.117181  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:13.117189  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:13.117194  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:13.119604  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:13.119630  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:13.119639  150386 round_trippers.go:580]     Audit-Id: c8f5cf60-c646-4679-801f-7ae2e5c3ba6d
	I0916 10:54:13.119644  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:13.119648  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:13.119651  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:13.119655  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:13.119659  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:13 GMT
	I0916 10:54:13.119839  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:13.120162  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:13.617539  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:13.617563  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:13.617573  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:13.617580  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:13.619962  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:13.619991  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:13.620001  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:13.620005  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:13 GMT
	I0916 10:54:13.620011  150386 round_trippers.go:580]     Audit-Id: 5c01f644-0736-4cb5-a8d4-13945e0fbf51
	I0916 10:54:13.620015  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:13.620021  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:13.620026  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:13.620197  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:14.116867  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:14.116908  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:14.116916  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:14.116919  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:14.119258  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:14.119285  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:14.119295  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:14.119303  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:14.119307  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:14.119311  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:14.119316  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:14 GMT
	I0916 10:54:14.119321  150386 round_trippers.go:580]     Audit-Id: ddf15265-f19a-4f14-9e84-803422b4fa29
	I0916 10:54:14.119425  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:14.616889  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:14.616914  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:14.616924  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:14.616930  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:14.618983  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:14.619009  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:14.619020  150386 round_trippers.go:580]     Audit-Id: 452ee33c-3ea0-42b2-b0bf-c04ce7660c10
	I0916 10:54:14.619024  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:14.619029  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:14.619033  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:14.619047  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:14.619054  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:14 GMT
	I0916 10:54:14.619170  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:15.116750  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:15.116776  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:15.116784  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:15.116788  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:15.119339  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:15.119366  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:15.119374  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:15 GMT
	I0916 10:54:15.119379  150386 round_trippers.go:580]     Audit-Id: dd1f1cc7-dc3d-4945-bae8-fa83ff662f3d
	I0916 10:54:15.119382  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:15.119385  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:15.119389  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:15.119393  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:15.119568  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:15.617310  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:15.617354  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:15.617362  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:15.617364  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:15.619707  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:15.619731  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:15.619740  150386 round_trippers.go:580]     Audit-Id: 02db9117-edca-4966-b561-7342514e4175
	I0916 10:54:15.619747  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:15.619750  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:15.619754  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:15.619758  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:15.619762  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:15 GMT
	I0916 10:54:15.619950  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:15.620279  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
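	The GET requests above are minikube's node-readiness poll: node_ready.go re-fetches the Node object from the API server roughly every 500ms and inspects its NodeReady condition, emitting the "Ready":"False" status line seen above on each miss until the kubelet reports Ready. The following is a minimal client-go sketch of that polling pattern, not minikube's actual implementation; the waitNodeReady helper, the 500ms interval, and the 6-minute timeout are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server for the named Node until its NodeReady
	// condition is True or the timeout expires, mirroring the ~500ms GET cadence
	// visible in the log above. (Hypothetical helper, not minikube's own code.)
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
				fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
	}

	func main() {
		// Assumes a reachable cluster via the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "multinode-026168", 6*time.Minute); err != nil {
			panic(err)
		}
	}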
	I0916 10:54:16.117647  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:16.117670  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:16.117677  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:16.117682  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:16.120054  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:16.120076  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:16.120086  150386 round_trippers.go:580]     Audit-Id: 931dfbfa-accc-4152-bdb1-53ab7f374af9
	I0916 10:54:16.120097  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:16.120103  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:16.120107  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:16.120111  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:16.120115  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:16 GMT
	I0916 10:54:16.120226  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:16.617542  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:16.617564  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:16.617572  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:16.617576  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:16.619723  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:16.619744  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:16.619751  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:16.619756  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:16.619759  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:16.619762  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:16.619765  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:16 GMT
	I0916 10:54:16.619768  150386 round_trippers.go:580]     Audit-Id: ef38a36a-4590-42fc-8ed5-00d2b11d84c8
	I0916 10:54:16.619904  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:17.117559  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:17.117582  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:17.117589  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:17.117592  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:17.120089  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:17.120114  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:17.120121  150386 round_trippers.go:580]     Audit-Id: 0d7a5e53-75bf-48b4-abdc-156b2590e690
	I0916 10:54:17.120126  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:17.120129  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:17.120133  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:17.120137  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:17.120141  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:17 GMT
	I0916 10:54:17.120237  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:17.616741  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:17.616765  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:17.616773  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:17.616779  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:17.618939  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:17.618966  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:17.618978  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:17.618984  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:17.618990  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:17.618996  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:17.619006  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:17 GMT
	I0916 10:54:17.619015  150386 round_trippers.go:580]     Audit-Id: 12a35e31-01f4-4ec4-b7aa-0de50d15a224
	I0916 10:54:17.619199  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:18.116842  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:18.116869  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:18.116879  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:18.116885  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:18.119481  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:18.119503  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:18.119515  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:18.119521  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:18.119525  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:18.119529  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:18.119533  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:18 GMT
	I0916 10:54:18.119537  150386 round_trippers.go:580]     Audit-Id: b949a021-55e1-4612-a1b0-de9148805d85
	I0916 10:54:18.119700  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:18.120094  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:18.617319  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:18.617356  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:18.617364  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:18.617370  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:18.619648  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:18.619666  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:18.619672  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:18.619675  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:18.619680  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:18.619684  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:18.619687  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:18 GMT
	I0916 10:54:18.619689  150386 round_trippers.go:580]     Audit-Id: 3b62b805-566e-4c20-b23a-9bdb959ccbcd
	I0916 10:54:18.619882  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:19.117620  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:19.117648  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:19.117658  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:19.117663  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:19.119862  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:19.119885  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:19.119894  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:19.119898  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:19.119903  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:19 GMT
	I0916 10:54:19.119907  150386 round_trippers.go:580]     Audit-Id: 56da5455-d24c-4e1a-b8be-a418fdfd2f46
	I0916 10:54:19.119910  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:19.119913  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:19.120066  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:19.617693  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:19.617722  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:19.617733  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:19.617739  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:19.619865  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:19.619887  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:19.619896  150386 round_trippers.go:580]     Audit-Id: 401e4086-8cd7-4de9-96c7-cb5c47c7cc12
	I0916 10:54:19.619902  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:19.619907  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:19.619912  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:19.619915  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:19.619919  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:19 GMT
	I0916 10:54:19.620041  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:20.116892  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:20.116916  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:20.116922  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:20.116926  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:20.119182  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:20.119212  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:20.119219  150386 round_trippers.go:580]     Audit-Id: e8971d30-03fb-4857-95d8-51fe0dcd83f2
	I0916 10:54:20.119225  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:20.119232  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:20.119234  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:20.119239  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:20.119243  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:20 GMT
	I0916 10:54:20.119408  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:20.617078  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:20.617106  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:20.617118  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:20.617125  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:20.619372  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:20.619396  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:20.619409  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:20 GMT
	I0916 10:54:20.619416  150386 round_trippers.go:580]     Audit-Id: e07f39e0-3d8b-4184-8dba-16dcd388a3e4
	I0916 10:54:20.619422  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:20.619428  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:20.619433  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:20.619442  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:20.619576  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"304","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0916 10:54:20.619898  150386 node_ready.go:53] node "multinode-026168" has status "Ready":"False"
	I0916 10:54:21.117050  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.117075  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.117085  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.117089  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.119269  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.119291  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.119300  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.119307  150386 round_trippers.go:580]     Audit-Id: d94eb3b5-c407-4838-b258-f4c49214f94c
	I0916 10:54:21.119312  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.119315  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.119319  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.119324  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.119448  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.119866  150386 node_ready.go:49] node "multinode-026168" has status "Ready":"True"
	I0916 10:54:21.119886  150386 node_ready.go:38] duration metric: took 40.50340662s for node "multinode-026168" to be "Ready" ...
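The polling loop above issues a GET against /api/v1/nodes/multinode-026168 roughly every 500ms and keeps logging "Ready":"False" until the node's Ready condition flips to True (here after 40.5s). A minimal client-go sketch of that pattern follows; it assumes a kubeconfig at the default path, and all names are illustrative, not minikube's actual node_ready.go:

// Sketch of the node-readiness poll seen in the log: fetch the Node object
// on an interval and inspect its Ready condition. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-026168"); err != nil {
		panic(err)
	}
	fmt.Println(`node "multinode-026168" has status "Ready":"True"`)
}

Each iteration is a single GET, which is why the log prints a full request/response header block for every 500ms tick.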
	I0916 10:54:21.119897  150386 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:54:21.119993  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:21.120006  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.120016  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.120023  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.122357  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.122379  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.122386  150386 round_trippers.go:580]     Audit-Id: 31eac586-78c2-4c69-b2e4-b36bdb0db681
	I0916 10:54:21.122395  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.122398  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.122401  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.122404  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.122408  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.122909  150386 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"402","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59368 chars]
	I0916 10:54:21.127452  150386 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.127527  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:54:21.127533  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.127540  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.127545  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.129578  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.129597  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.129604  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.129610  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.129614  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.129618  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.129621  150386 round_trippers.go:580]     Audit-Id: 0695cdc4-bb80-4878-b510-951311f1c0c9
	I0916 10:54:21.129625  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.129747  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"402","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6701 chars]
	I0916 10:54:21.130168  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.130184  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.130194  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.130201  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.131803  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.131818  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.131824  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.131828  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.131831  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.131833  150386 round_trippers.go:580]     Audit-Id: c250efec-29fd-47db-be33-fca840c0d49b
	I0916 10:54:21.131836  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.131839  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.132180  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.628097  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:54:21.628128  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.628140  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.628145  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.630394  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.630421  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.630430  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.630436  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.630441  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.630446  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.630451  150386 round_trippers.go:580]     Audit-Id: 76206d57-1511-4733-a20a-f7846b30d399
	I0916 10:54:21.630455  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.630613  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0916 10:54:21.631075  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.631090  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.631099  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.631102  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.632849  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.632865  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.632871  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.632877  150386 round_trippers.go:580]     Audit-Id: ca24b158-aca0-4520-b5f1-66851865e9e1
	I0916 10:54:21.632881  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.632884  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.632893  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.632896  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.633021  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.633359  150386 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.633378  150386 pod_ready.go:82] duration metric: took 505.900424ms for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.633391  150386 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.633464  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:54:21.633474  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.633484  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.633496  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.635047  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.635060  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.635065  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.635069  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.635073  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.635076  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.635079  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.635083  150386 round_trippers.go:580]     Audit-Id: e25dddc3-2899-4dcf-b6c3-c2ebbf017b4a
	I0916 10:54:21.635202  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"382","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6435 chars]
	I0916 10:54:21.635522  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.635532  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.635539  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.635543  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.637033  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.637049  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.637056  150386 round_trippers.go:580]     Audit-Id: a402ba77-612a-4a78-9161-1f9af7dc14dc
	I0916 10:54:21.637059  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.637064  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.637067  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.637075  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.637082  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.637196  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.637568  150386 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.637585  150386 pod_ready.go:82] duration metric: took 4.183061ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.637602  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.637667  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:54:21.637678  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.637687  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.637694  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.639190  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.639200  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.639205  150386 round_trippers.go:580]     Audit-Id: 8d2d738a-85a4-4c4d-af29-f7632eaaf8fe
	I0916 10:54:21.639210  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.639215  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.639219  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.639223  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.639227  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.639415  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"384","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8513 chars]
	I0916 10:54:21.639783  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.639794  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.639801  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.639804  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.641136  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.641148  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.641154  150386 round_trippers.go:580]     Audit-Id: df3deefa-caff-4811-9e39-a5d826b48e18
	I0916 10:54:21.641157  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.641160  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.641164  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.641166  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.641169  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.641327  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.641616  150386 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.641632  150386 pod_ready.go:82] duration metric: took 4.0197ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.641643  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.641697  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:54:21.641707  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.641718  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.641724  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.643065  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.643082  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.643090  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.643097  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.643103  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.643107  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.643111  150386 round_trippers.go:580]     Audit-Id: fd45b8c3-1639-4c9c-9a3c-d1b60ed060af
	I0916 10:54:21.643119  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.643237  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"380","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8088 chars]
	I0916 10:54:21.643686  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.643701  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.643711  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.643717  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.644942  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:21.644955  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.644961  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.644964  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.644967  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.644970  150386 round_trippers.go:580]     Audit-Id: 796969a1-6899-441e-96f7-1ef8fe8ae578
	I0916 10:54:21.644973  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.644976  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.645122  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.645468  150386 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.645484  150386 pod_ready.go:82] duration metric: took 3.833778ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.645496  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:21.717891  150386 request.go:632] Waited for 72.3345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:54:21.717991  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:54:21.718003  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.718010  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.718015  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.720260  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.720288  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.720295  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.720299  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.720303  150386 round_trippers.go:580]     Audit-Id: 53ae8947-274b-4459-9e3c-cbaf6f154315
	I0916 10:54:21.720307  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.720312  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.720316  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.720465  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"348","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:54:21.917174  150386 request.go:632] Waited for 196.227739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.917256  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:21.917262  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:21.917269  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:21.917274  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:21.919941  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:21.919981  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:21.919991  150386 round_trippers.go:580]     Audit-Id: bd7a902b-75f5-47b6-a673-4bc31c4a42be
	I0916 10:54:21.919997  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:21.920001  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:21.920005  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:21.920009  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:21.920014  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:21 GMT
	I0916 10:54:21.920129  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:21.920479  150386 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:21.920497  150386 pod_ready.go:82] duration metric: took 274.994935ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
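The "Waited for ... due to client-side throttling, not priority and fairness" lines just above are emitted by client-go's own token-bucket rate limiter, configured via the QPS and Burst fields on rest.Config; they are unrelated to the server-side priority-and-fairness headers (X-Kubernetes-Pf-Flowschema-Uid, X-Kubernetes-Pf-Prioritylevel-Uid) visible in the responses. A hedged sketch of where those knobs live (the values chosen are illustrative):

// Sketch of the source of the client-side throttling waits: client-go
// rate-limits outgoing requests with a token bucket sized by QPS and Burst.
// Raising them removes the client-side wait; server-side APF is unaffected.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second once the burst is spent
	cfg.Burst = 100 // default burst is 10
	_ = kubernetes.NewForConfigOrDie(cfg)
}

With the defaults (QPS 5, Burst 10), the rapid back-to-back pod and node GETs in this phase exhaust the burst, producing the ~72ms and ~197ms waits logged above.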
	I0916 10:54:21.920507  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:22.117992  150386 request.go:632] Waited for 197.422651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:54:22.118062  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:54:22.118066  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.118074  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.118079  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.120308  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:22.120328  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.120334  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.120340  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.120346  150386 round_trippers.go:580]     Audit-Id: 12d10cd1-f471-40a4-b04b-552d91f6b9ab
	I0916 10:54:22.120350  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.120353  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.120357  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.120521  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"377","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4970 chars]
	I0916 10:54:22.318078  150386 request.go:632] Waited for 197.115028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:22.318145  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:22.318152  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.318159  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.318165  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.320651  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:22.320674  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.320681  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.320684  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.320687  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.320691  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.320694  150386 round_trippers.go:580]     Audit-Id: 70e0a499-d469-4a3f-8d56-398e020a712a
	I0916 10:54:22.320697  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.320887  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:22.321271  150386 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:22.321289  150386 pod_ready.go:82] duration metric: took 400.776828ms for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:22.321302  150386 pod_ready.go:39] duration metric: took 1.201386489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
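
The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS 5, burst 10), which is what forces the readiness poll to sleep roughly 200ms between GETs. A minimal sketch of how a client-go caller raises those limits, assuming a kubeconfig at the default path:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults are QPS=5, Burst=10; raising them removes the
        // ~200ms client-side waits reported in the log above.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("clientset ready: %T\n", cs)
    }
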
	I0916 10:54:22.321330  150386 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:54:22.321414  150386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:54:22.332357  150386 command_runner.go:130] > 1502
	I0916 10:54:22.332397  150386 api_server.go:72] duration metric: took 42.33117523s to wait for apiserver process to appear ...
	I0916 10:54:22.332407  150386 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:54:22.332431  150386 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:54:22.336925  150386 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
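
The healthz probe above is a plain HTTPS GET; on a default apiserver, /healthz is readable anonymously. A minimal Go sketch of the same probe, assuming anonymous access and skipping CA verification for brevity (minikube itself trusts its generated CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify is an assumption for brevity; the real check
        // verifies the cluster's generated CA.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.67.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
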
	I0916 10:54:22.336986  150386 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0916 10:54:22.336991  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.336998  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.337002  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.337746  150386 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:54:22.337771  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.337781  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.337787  150386 round_trippers.go:580]     Audit-Id: 3d9f177d-85ed-463f-96f1-b9da4dd8452c
	I0916 10:54:22.337792  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.337798  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.337804  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.337810  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.337820  150386 round_trippers.go:580]     Content-Length: 263
	I0916 10:54:22.337841  150386 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:54:22.337950  150386 api_server.go:141] control plane version: v1.31.1
	I0916 10:54:22.337970  150386 api_server.go:131] duration metric: took 5.557199ms to wait for apiserver health ...
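
The /version payload shown above unmarshals directly into apimachinery's version.Info type. A small sketch using the fields from that response:

    package main

    import (
        "encoding/json"
        "fmt"

        "k8s.io/apimachinery/pkg/version"
    )

    func main() {
        raw := []byte(`{"major":"1","minor":"31","gitVersion":"v1.31.1"}`)
        var v version.Info
        if err := json.Unmarshal(raw, &v); err != nil {
            panic(err)
        }
        fmt.Println(v.GitVersion) // v1.31.1, matching the control plane version above
    }
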
	I0916 10:54:22.337977  150386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:54:22.517192  150386 request.go:632] Waited for 179.154193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:22.517257  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:22.517262  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.517268  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.517273  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.520573  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:22.520600  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.520612  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.520619  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.520625  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.520629  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.520633  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.520636  150386 round_trippers.go:580]     Audit-Id: 31c363b0-1712-451f-81f2-cf95c81f3f77
	I0916 10:54:22.521223  150386 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59444 chars]
	I0916 10:54:22.524175  150386 system_pods.go:59] 8 kube-system pods found
	I0916 10:54:22.524211  150386 system_pods.go:61] "coredns-7c65d6cfc9-s82cx" [85130138-c50d-47a8-8bbe-de91bb9a0472] Running
	I0916 10:54:22.524217  150386 system_pods.go:61] "etcd-multinode-026168" [7221a4cc-7e2d-41a3-b83b-579646af2de2] Running
	I0916 10:54:22.524221  150386 system_pods.go:61] "kindnet-zv2p5" [9e993dc5-3e51-407a-96f0-81c74274fb7c] Running
	I0916 10:54:22.524225  150386 system_pods.go:61] "kube-apiserver-multinode-026168" [e0a10f33-efc2-4f2d-b46c-bdb68cf664ce] Running
	I0916 10:54:22.524234  150386 system_pods.go:61] "kube-controller-manager-multinode-026168" [c0b53919-27a0-4a54-ba15-a530a06dbf0d] Running
	I0916 10:54:22.524239  150386 system_pods.go:61] "kube-proxy-6p6vt" [42162ba1-cb61-4a95-acc5-5c4c5f3ead8c] Running
	I0916 10:54:22.524244  150386 system_pods.go:61] "kube-scheduler-multinode-026168" [b293178b-0aac-457b-b950-71fdd2c8fa80] Running
	I0916 10:54:22.524250  150386 system_pods.go:61] "storage-provisioner" [ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7] Running
	I0916 10:54:22.524257  150386 system_pods.go:74] duration metric: took 186.274611ms to wait for pod list to return data ...
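
The system_pods sweep above lists kube-system pods and checks each one's phase. A sketch of the equivalent client-go call; this is a fragment, assuming a configured *kubernetes.Clientset such as the one built in the earlier snippet:

    // Fragment: assumes a configured *kubernetes.Clientset (see the earlier sketch).
    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func allSystemPodsRunning(cs *kubernetes.Clientset) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("%s is %s, not Running\n", p.Name, p.Status.Phase)
                return false, nil
            }
        }
        return true, nil // all 8 pods listed above report Running
    }
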
	I0916 10:54:22.524270  150386 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:54:22.717753  150386 request.go:632] Waited for 193.393723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:54:22.717852  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:54:22.717863  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.717874  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.717882  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.721139  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:22.721169  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.721177  150386 round_trippers.go:580]     Content-Length: 261
	I0916 10:54:22.721183  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.721187  150386 round_trippers.go:580]     Audit-Id: 2d2d0765-fe8f-4a12-ae5f-a890fee1ee4b
	I0916 10:54:22.721191  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.721196  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.721200  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.721204  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.721233  150386 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3f54840f-e917-4b73-aac8-060ce8f211be","resourceVersion":"325","creationTimestamp":"2024-09-16T10:53:39Z"}}]}
	I0916 10:54:22.721473  150386 default_sa.go:45] found service account: "default"
	I0916 10:54:22.721494  150386 default_sa.go:55] duration metric: took 197.218223ms for default service account to be created ...
	I0916 10:54:22.721507  150386 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:54:22.917603  150386 request.go:632] Waited for 196.008334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:22.917692  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:22.917700  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:22.917710  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:22.917722  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:22.920897  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:22.920919  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:22.920926  150386 round_trippers.go:580]     Audit-Id: 59cb84a7-961b-4c43-b13a-5cdcd0ab7320
	I0916 10:54:22.920930  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:22.920933  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:22.920937  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:22.920940  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:22.920943  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:22 GMT
	I0916 10:54:22.921535  150386 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59444 chars]
	I0916 10:54:22.923403  150386 system_pods.go:86] 8 kube-system pods found
	I0916 10:54:22.923430  150386 system_pods.go:89] "coredns-7c65d6cfc9-s82cx" [85130138-c50d-47a8-8bbe-de91bb9a0472] Running
	I0916 10:54:22.923435  150386 system_pods.go:89] "etcd-multinode-026168" [7221a4cc-7e2d-41a3-b83b-579646af2de2] Running
	I0916 10:54:22.923439  150386 system_pods.go:89] "kindnet-zv2p5" [9e993dc5-3e51-407a-96f0-81c74274fb7c] Running
	I0916 10:54:22.923442  150386 system_pods.go:89] "kube-apiserver-multinode-026168" [e0a10f33-efc2-4f2d-b46c-bdb68cf664ce] Running
	I0916 10:54:22.923446  150386 system_pods.go:89] "kube-controller-manager-multinode-026168" [c0b53919-27a0-4a54-ba15-a530a06dbf0d] Running
	I0916 10:54:22.923451  150386 system_pods.go:89] "kube-proxy-6p6vt" [42162ba1-cb61-4a95-acc5-5c4c5f3ead8c] Running
	I0916 10:54:22.923455  150386 system_pods.go:89] "kube-scheduler-multinode-026168" [b293178b-0aac-457b-b950-71fdd2c8fa80] Running
	I0916 10:54:22.923458  150386 system_pods.go:89] "storage-provisioner" [ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7] Running
	I0916 10:54:22.923463  150386 system_pods.go:126] duration metric: took 201.948979ms to wait for k8s-apps to be running ...
	I0916 10:54:22.923470  150386 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:54:22.923512  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:54:22.935482  150386 system_svc.go:56] duration metric: took 12.003954ms WaitForService to wait for kubelet
	I0916 10:54:22.935510  150386 kubeadm.go:582] duration metric: took 42.934287833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:22.935531  150386 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:54:23.117992  150386 request.go:632] Waited for 182.386401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:54:23.118099  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:54:23.118109  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:23.118120  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:23.118130  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:23.121007  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:23.121033  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:23.121043  150386 round_trippers.go:580]     Audit-Id: 13b6d3ea-0fca-4ca7-8081-ec0a3e9b8e01
	I0916 10:54:23.121051  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:23.121055  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:23.121059  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:23.121063  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:23.121067  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:23 GMT
	I0916 10:54:23.121274  150386 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0916 10:54:23.121686  150386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:54:23.121712  150386 node_conditions.go:123] node cpu capacity is 8
	I0916 10:54:23.121726  150386 node_conditions.go:105] duration metric: took 186.188965ms to run NodePressure ...
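
The capacity figures above are Kubernetes resource quantities. A short sketch of converting them with apimachinery's resource package, using the values from this node:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        eph := resource.MustParse("304681132Ki") // ephemeral-storage capacity above
        cpu := resource.MustParse("8")           // cpu capacity above
        fmt.Printf("storage: %d bytes (~312GB), cpus: %d\n", eph.Value(), cpu.Value())
    }
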
	I0916 10:54:23.121741  150386 start.go:241] waiting for startup goroutines ...
	I0916 10:54:23.121753  150386 start.go:246] waiting for cluster config update ...
	I0916 10:54:23.121771  150386 start.go:255] writing updated cluster config ...
	I0916 10:54:23.124160  150386 out.go:201] 
	I0916 10:54:23.125798  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:54:23.125924  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:54:23.127806  150386 out.go:177] * Starting "multinode-026168-m02" worker node in "multinode-026168" cluster
	I0916 10:54:23.129676  150386 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:54:23.131281  150386 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:54:23.132722  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:54:23.132755  150386 cache.go:56] Caching tarball of preloaded images
	I0916 10:54:23.132834  150386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:54:23.132867  150386 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:54:23.132883  150386 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:54:23.132994  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	W0916 10:54:23.153756  150386 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:54:23.153779  150386 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:54:23.153875  150386 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:54:23.153894  150386 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:54:23.153900  150386 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:54:23.153920  150386 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:54:23.153928  150386 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:54:23.155051  150386 image.go:273] response: 
	I0916 10:54:23.212231  150386 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:54:23.212268  150386 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:54:23.212308  150386 start.go:360] acquireMachinesLock for multinode-026168-m02: {Name:mk244ea9c32e56587b67dd9c9f2d4f0dcccd26e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:54:23.212428  150386 start.go:364] duration metric: took 97.765µs to acquireMachinesLock for "multinode-026168-m02"
	I0916 10:54:23.212460  150386 start.go:93] Provisioning new machine with config: &{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 10:54:23.212535  150386 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 10:54:23.214703  150386 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:54:23.214819  150386 start.go:159] libmachine.API.Create for "multinode-026168" (driver="docker")
	I0916 10:54:23.214849  150386 client.go:168] LocalClient.Create starting
	I0916 10:54:23.214929  150386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 10:54:23.214972  150386 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:23.214987  150386 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:23.215035  150386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 10:54:23.215053  150386 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:23.215063  150386 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:23.215253  150386 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:54:23.231940  150386 network_create.go:77] Found existing network {name:multinode-026168 subnet:0xc002012150 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0916 10:54:23.231978  150386 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-026168-m02" container
	I0916 10:54:23.232031  150386 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:54:23.247936  150386 cli_runner.go:164] Run: docker volume create multinode-026168-m02 --label name.minikube.sigs.k8s.io=multinode-026168-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:54:23.265752  150386 oci.go:103] Successfully created a docker volume multinode-026168-m02
	I0916 10:54:23.265835  150386 cli_runner.go:164] Run: docker run --rm --name multinode-026168-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-026168-m02 --entrypoint /usr/bin/test -v multinode-026168-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:54:23.761053  150386 oci.go:107] Successfully prepared a docker volume multinode-026168-m02
	I0916 10:54:23.761096  150386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:54:23.761121  150386 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:54:23.761183  150386 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-026168-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:54:28.208705  150386 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-026168-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.447479357s)
	I0916 10:54:28.208743  150386 kic.go:203] duration metric: took 4.447620046s to extract preloaded images to volume ...
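
The extraction step above is an ordinary docker run shelled out through minikube's cli_runner. A sketch of the same os/exec pattern with the arguments the log shows; this is a fragment, and the tarball, volume, and image names are parameters rather than minikube's actual helper:

    // Fragment: tarball, volume, and image are parameters; values come from the log.
    import (
        "fmt"
        "os/exec"
        "time"
    )

    func extractPreload(tarball, volume, image string) error {
        start := time.Now()
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("docker run: %v: %s", err, out)
        }
        fmt.Printf("extracted preload in %s\n", time.Since(start)) // ~4.4s in the run above
        return nil
    }
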
	W0916 10:54:28.208853  150386 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:54:28.208937  150386 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:54:28.258744  150386 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-026168-m02 --name multinode-026168-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-026168-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-026168-m02 --network multinode-026168 --ip 192.168.67.3 --volume multinode-026168-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:54:28.552494  150386 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Running}}
	I0916 10:54:28.570713  150386 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:54:28.589273  150386 cli_runner.go:164] Run: docker exec multinode-026168-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:54:28.632228  150386 oci.go:144] the created container "multinode-026168-m02" has a running status.
	I0916 10:54:28.632263  150386 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa...
	I0916 10:54:28.724402  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:54:28.724451  150386 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:54:28.745185  150386 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:54:28.762081  150386 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:54:28.762103  150386 kic_runner.go:114] Args: [docker exec --privileged multinode-026168-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:54:28.807858  150386 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:54:28.824342  150386 machine.go:93] provisionDockerMachine start ...
	I0916 10:54:28.824429  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:28.843239  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:54:28.843559  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:54:28.843585  150386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:54:28.844383  150386 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51938->127.0.0.1:32908: read: connection reset by peer
	I0916 10:54:31.976892  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m02
	
	I0916 10:54:31.976922  150386 ubuntu.go:169] provisioning hostname "multinode-026168-m02"
	I0916 10:54:31.976973  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:31.994091  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:54:31.994288  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:54:31.994304  150386 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168-m02 && echo "multinode-026168-m02" | sudo tee /etc/hostname
	I0916 10:54:32.140171  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m02
	
	I0916 10:54:32.140251  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:32.157277  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:54:32.157465  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:54:32.157485  150386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:54:32.289554  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:54:32.289591  150386 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:54:32.289616  150386 ubuntu.go:177] setting up certificates
	I0916 10:54:32.289631  150386 provision.go:84] configureAuth start
	I0916 10:54:32.289700  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:54:32.306551  150386 provision.go:143] copyHostCerts
	I0916 10:54:32.306588  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:54:32.306618  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:54:32.306624  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:54:32.306708  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:54:32.306801  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:54:32.306828  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:54:32.306837  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:54:32.306876  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:54:32.306945  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:54:32.306970  150386 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:54:32.306980  150386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:54:32.307014  150386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:54:32.307135  150386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-026168-m02]
	I0916 10:54:32.488245  150386 provision.go:177] copyRemoteCerts
	I0916 10:54:32.488298  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:54:32.488335  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:32.506446  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:32.602051  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:54:32.602141  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:54:32.623639  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:54:32.623701  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:54:32.646080  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:54:32.646141  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:54:32.668553  150386 provision.go:87] duration metric: took 378.909929ms to configureAuth
	I0916 10:54:32.668581  150386 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:54:32.668762  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:54:32.668869  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:32.687689  150386 main.go:141] libmachine: Using SSH client type: native
	I0916 10:54:32.687890  150386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:54:32.687908  150386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:54:32.911387  150386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:54:32.911413  150386 machine.go:96] duration metric: took 4.087048728s to provisionDockerMachine
	I0916 10:54:32.911423  150386 client.go:171] duration metric: took 9.696565035s to LocalClient.Create
	I0916 10:54:32.911442  150386 start.go:167] duration metric: took 9.696623047s to libmachine.API.Create "multinode-026168"
	I0916 10:54:32.911451  150386 start.go:293] postStartSetup for "multinode-026168-m02" (driver="docker")
	I0916 10:54:32.911464  150386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:54:32.911527  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:54:32.911563  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:32.929049  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:33.030331  150386 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:54:33.033229  150386 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:54:33.033271  150386 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:54:33.033283  150386 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:54:33.033292  150386 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:54:33.033301  150386 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:54:33.033307  150386 command_runner.go:130] > ID=ubuntu
	I0916 10:54:33.033313  150386 command_runner.go:130] > ID_LIKE=debian
	I0916 10:54:33.033323  150386 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:54:33.033328  150386 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:54:33.033362  150386 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:54:33.033376  150386 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:54:33.033385  150386 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:54:33.033452  150386 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:54:33.033475  150386 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:54:33.033482  150386 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:54:33.033488  150386 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:54:33.033498  150386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:54:33.033548  150386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:54:33.033614  150386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:54:33.033622  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:54:33.033715  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:54:33.041732  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:54:33.063842  150386 start.go:296] duration metric: took 152.375443ms for postStartSetup
	I0916 10:54:33.064206  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:54:33.081271  150386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:54:33.081670  150386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:54:33.081714  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:33.099427  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:33.190562  150386 command_runner.go:130] > 30%
	I0916 10:54:33.190640  150386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:54:33.194859  150386 command_runner.go:130] > 204G
	I0916 10:54:33.195150  150386 start.go:128] duration metric: took 9.982603136s to createHost
	I0916 10:54:33.195175  150386 start.go:83] releasing machines lock for "multinode-026168-m02", held for 9.982732368s
	I0916 10:54:33.195248  150386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:54:33.214796  150386 out.go:177] * Found network options:
	I0916 10:54:33.216317  150386 out.go:177]   - NO_PROXY=192.168.67.2
	W0916 10:54:33.217848  150386 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:54:33.217906  150386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:54:33.218001  150386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:54:33.218053  150386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:54:33.218061  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:33.218103  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:54:33.236009  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:33.236423  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:54:33.405768  150386 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:54:33.464179  150386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:54:33.468338  150386 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:54:33.468368  150386 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:54:33.468378  150386 command_runner.go:130] > Device: b7h/183d	Inode: 535096      Links: 1
	I0916 10:54:33.468384  150386 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:54:33.468390  150386 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:54:33.468395  150386 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:54:33.468399  150386 command_runner.go:130] > Change: 2024-09-16 10:23:14.009756274 +0000
	I0916 10:54:33.468416  150386 command_runner.go:130] >  Birth: 2024-09-16 10:23:14.009756274 +0000
	I0916 10:54:33.468693  150386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:54:33.486323  150386 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:54:33.486417  150386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:54:33.513648  150386 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0916 10:54:33.513703  150386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:54:33.513713  150386 start.go:495] detecting cgroup driver to use...
	I0916 10:54:33.513749  150386 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:54:33.513797  150386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:54:33.528251  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:54:33.540275  150386 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:54:33.540343  150386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:54:33.552913  150386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:54:33.566361  150386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:54:33.639899  150386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:54:33.731263  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 10:54:33.731311  150386 docker.go:233] disabling docker service ...
	I0916 10:54:33.731365  150386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:54:33.749417  150386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:54:33.760326  150386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:54:33.843879  150386 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0916 10:54:33.843949  150386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:54:33.930022  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 10:54:33.930110  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:54:33.940911  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:54:33.956121  150386 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:54:33.956165  150386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:54:33.956211  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:33.966074  150386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:54:33.966138  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:33.975297  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:33.984512  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:33.993945  150386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:54:34.002689  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:34.012279  150386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:54:34.026984  150386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
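
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is an illustrative reconstruction from the commands, not the file's verbatim contents; the TOML section headers are omitted because the log does not show them:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
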
	I0916 10:54:34.036614  150386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:54:34.043858  150386 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:54:34.044465  150386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:54:34.052424  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:54:34.131587  150386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:54:34.245486  150386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:54:34.245562  150386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:54:34.248995  150386 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:54:34.249028  150386 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:54:34.249038  150386 command_runner.go:130] > Device: c0h/192d	Inode: 186         Links: 1
	I0916 10:54:34.249045  150386 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:54:34.249050  150386 command_runner.go:130] > Access: 2024-09-16 10:54:34.232046114 +0000
	I0916 10:54:34.249056  150386 command_runner.go:130] > Modify: 2024-09-16 10:54:34.232046114 +0000
	I0916 10:54:34.249061  150386 command_runner.go:130] > Change: 2024-09-16 10:54:34.232046114 +0000
	I0916 10:54:34.249065  150386 command_runner.go:130] >  Birth: -
	I0916 10:54:34.249111  150386 start.go:563] Will wait 60s for crictl version
	I0916 10:54:34.249160  150386 ssh_runner.go:195] Run: which crictl
	I0916 10:54:34.252370  150386 command_runner.go:130] > /usr/bin/crictl
	I0916 10:54:34.252469  150386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:54:34.284451  150386 command_runner.go:130] > Version:  0.1.0
	I0916 10:54:34.284476  150386 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:54:34.284480  150386 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:54:34.284486  150386 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:54:34.286613  150386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:54:34.286695  150386 ssh_runner.go:195] Run: crio --version
	I0916 10:54:34.319283  150386 command_runner.go:130] > crio version 1.24.6
	I0916 10:54:34.319304  150386 command_runner.go:130] > Version:          1.24.6
	I0916 10:54:34.319313  150386 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:54:34.319320  150386 command_runner.go:130] > GitTreeState:     clean
	I0916 10:54:34.319329  150386 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:54:34.319337  150386 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:54:34.319343  150386 command_runner.go:130] > Compiler:         gc
	I0916 10:54:34.319351  150386 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:54:34.319357  150386 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:54:34.319365  150386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:54:34.319369  150386 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:54:34.319373  150386 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:54:34.321161  150386 ssh_runner.go:195] Run: crio --version
	I0916 10:54:34.354614  150386 command_runner.go:130] > crio version 1.24.6
	I0916 10:54:34.354644  150386 command_runner.go:130] > Version:          1.24.6
	I0916 10:54:34.354656  150386 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:54:34.354664  150386 command_runner.go:130] > GitTreeState:     clean
	I0916 10:54:34.354672  150386 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:54:34.354679  150386 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:54:34.354686  150386 command_runner.go:130] > Compiler:         gc
	I0916 10:54:34.354694  150386 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:54:34.354702  150386 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:54:34.354716  150386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:54:34.354722  150386 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:54:34.354729  150386 command_runner.go:130] > AppArmorEnabled:  false
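
minikube runs `crio --version` twice and logs the "Key: value" report line by line, as above. A small sketch of turning that output into a map; `parseVersionInfo` is invented for this example and simply splits each line on the first colon.

    // Parse `crio --version`-style "Key: value" output into a map.
    // Lines without a colon (e.g. the banner) are skipped.
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    func parseVersionInfo(text string) map[string]string {
        info := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(text))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
                info[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        return info
    }

    func main() {
        out, err := exec.Command("crio", "--version").Output()
        if err != nil {
            panic(err)
        }
        info := parseVersionInfo(string(out))
        fmt.Println(info["Version"], info["GoVersion"])
    }
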
	I0916 10:54:34.356900  150386 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:54:34.358515  150386 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:54:34.359941  150386 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:54:34.377238  150386 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:54:34.380850  150386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
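
The one-liner above updates /etc/hosts idempotently: filter out any stale `host.minikube.internal` line, append the current mapping, and copy the result back with sudo. A sketch of the same read-filter-append idea in Go; `setHostsEntry` is hypothetical and assumes the process can write the file directly (the log shells out to `sudo cp` instead).

    // Idempotent hosts-entry update: drop stale lines for the name, then
    // append the fresh IP -> name mapping. Assumes direct write access.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Same filter as `grep -v $'\thost.minikube.internal$'`.
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
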
	I0916 10:54:34.390936  150386 mustload.go:65] Loading cluster: multinode-026168
	I0916 10:54:34.391127  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:54:34.391324  150386 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:54:34.410822  150386 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:54:34.411143  150386 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.3
	I0916 10:54:34.411160  150386 certs.go:194] generating shared ca certs ...
	I0916 10:54:34.411182  150386 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:54:34.411329  150386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:54:34.411392  150386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:54:34.411411  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:54:34.411433  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:54:34.411454  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:54:34.411477  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:54:34.411547  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:54:34.411599  150386 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:54:34.411613  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:54:34.411653  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:54:34.411690  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:54:34.411725  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:54:34.411788  150386 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:54:34.411828  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.411848  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.411867  150386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.411895  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:54:34.435909  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:54:34.458727  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:54:34.481625  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:54:34.502802  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:54:34.525129  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:54:34.547503  150386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:54:34.570192  150386 ssh_runner.go:195] Run: openssl version
	I0916 10:54:34.575429  150386 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:54:34.575514  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:54:34.584455  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.587759  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.587789  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.587825  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:54:34.593965  150386 command_runner.go:130] > 51391683
	I0916 10:54:34.594155  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:54:34.602965  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:54:34.611628  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.615051  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.615113  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.615162  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:54:34.621281  150386 command_runner.go:130] > 3ec20f2e
	I0916 10:54:34.621469  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:54:34.630305  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:54:34.639257  150386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.642542  150386 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.642573  150386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.642618  150386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:54:34.648922  150386 command_runner.go:130] > b5213941
	I0916 10:54:34.648987  150386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
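
The openssl/ln pairs above implement the standard ca-certificates directory layout: OpenSSL locates a trust anchor by the subject hash printed by `openssl x509 -hash`, so each certificate needs a `<hash>.0` symlink in /etc/ssl/certs. A sketch of that convention; `linkCert` is illustrative and assumes openssl is on PATH.

    // <subject-hash>.0 symlink convention: OpenSSL resolves a trust anchor
    // by its subject hash, so the cert must be linked under that name.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func linkCert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            panic(err)
        }
    }
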
	I0916 10:54:34.657747  150386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:54:34.660935  150386 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:54:34.660982  150386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:54:34.661027  150386 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.31.1 crio false true} ...
	I0916 10:54:34.661126  150386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
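
kubeadm.go:946 above renders a kubelet systemd drop-in with node-specific flags (hostname override and node IP). A sketch of that kind of templating with text/template; the struct fields are invented for this example and the flag set is abbreviated from the unit shown in the log.

    // Render a kubelet systemd drop-in from node-specific values.
    // Template abbreviated from the unit in the log; fields are invented.
    package main

    import (
        "os"
        "text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletUnit))
        nodeArgs := struct {
            KubernetesVersion, Hostname, NodeIP string
        }{"v1.31.1", "multinode-026168-m02", "192.168.67.3"}
        if err := t.Execute(os.Stdout, nodeArgs); err != nil {
            panic(err)
        }
    }
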
	I0916 10:54:34.661292  150386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:54:34.669451  150386 command_runner.go:130] > kubeadm
	I0916 10:54:34.669476  150386 command_runner.go:130] > kubectl
	I0916 10:54:34.669482  150386 command_runner.go:130] > kubelet
	I0916 10:54:34.669508  150386 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:54:34.669558  150386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:54:34.677633  150386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I0916 10:54:34.694198  150386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:54:34.710629  150386 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:54:34.714201  150386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:54:34.724359  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:54:34.799551  150386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:54:34.812199  150386 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:54:34.812442  150386 start.go:317] joinCluster: &{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:54:34.812523  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:54:34.812562  150386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:54:34.831158  150386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:54:34.972349  150386 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token u9veb8.vmzv8qzigtxm2pxd --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
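
The printed join command pins the cluster CA via `--discovery-token-ca-cert-hash`. By kubeadm's convention that value is the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo, which the sketch below recomputes; the CA path is taken from the cert paths earlier in this log and is only illustrative.

    // Recompute kubeadm's discovery hash: SHA-256 over the DER-encoded
    // SubjectPublicKeyInfo of the cluster CA certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
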
	I0916 10:54:34.977238  150386 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 10:54:34.977276  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u9veb8.vmzv8qzigtxm2pxd --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-026168-m02"
	I0916 10:54:35.018804  150386 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:54:35.072593  150386 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:54:36.225928  150386 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 10:54:36.225960  150386 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:54:36.225972  150386 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:54:36.225980  150386 command_runner.go:130] > OS: Linux
	I0916 10:54:36.225988  150386 command_runner.go:130] > CGROUPS_CPU: enabled
	I0916 10:54:36.226001  150386 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0916 10:54:36.226011  150386 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0916 10:54:36.226021  150386 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0916 10:54:36.226031  150386 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0916 10:54:36.226043  150386 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0916 10:54:36.226058  150386 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0916 10:54:36.226069  150386 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0916 10:54:36.226080  150386 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0916 10:54:36.226091  150386 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0916 10:54:36.226103  150386 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0916 10:54:36.226123  150386 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:54:36.226138  150386 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:54:36.226149  150386 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 10:54:36.226170  150386 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:54:36.226182  150386 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001502695s
	I0916 10:54:36.226194  150386 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0916 10:54:36.226203  150386 command_runner.go:130] > This node has joined the cluster:
	I0916 10:54:36.226212  150386 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0916 10:54:36.226224  150386 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0916 10:54:36.226238  150386 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0916 10:54:36.226265  150386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u9veb8.vmzv8qzigtxm2pxd --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-026168-m02": (1.248974228s)
	I0916 10:54:36.226367  150386 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:54:36.390991  150386 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0916 10:54:36.391100  150386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-026168-m02 minikube.k8s.io/updated_at=2024_09_16T10_54_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-026168 minikube.k8s.io/primary=false
	I0916 10:54:36.460188  150386 command_runner.go:130] > node/multinode-026168-m02 labeled
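
After the join succeeds, minikube stamps the new node with metadata labels via `kubectl label --overwrite` (the command two lines up). A sketch of issuing the same call from Go; `labelNode` is hypothetical and assumes kubectl is on PATH with a kubeconfig pointing at the cluster.

    // Stamp a freshly joined node with labels via kubectl, as in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func labelNode(node string, labels map[string]string) error {
        args := []string{"label", "--overwrite", "nodes", node}
        for k, v := range labels {
            args = append(args, fmt.Sprintf("%s=%s", k, v))
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%v: %s", err, out)
        }
        fmt.Print(string(out)) // e.g. "node/multinode-026168-m02 labeled"
        return nil
    }

    func main() {
        if err := labelNode("multinode-026168-m02", map[string]string{
            "minikube.k8s.io/name":    "multinode-026168",
            "minikube.k8s.io/primary": "false",
        }); err != nil {
            panic(err)
        }
    }
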
	I0916 10:54:36.462930  150386 start.go:319] duration metric: took 1.650478524s to joinCluster
	I0916 10:54:36.463021  150386 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 10:54:36.463283  150386 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:54:36.464831  150386 out.go:177] * Verifying Kubernetes components...
	I0916 10:54:36.466257  150386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:54:36.546582  150386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:54:36.558067  150386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:54:36.558320  150386 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:54:36.558583  150386 node_ready.go:35] waiting up to 6m0s for node "multinode-026168-m02" to be "Ready" ...
	I0916 10:54:36.558672  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:36.558683  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:36.558693  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:36.558699  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:36.561008  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:36.561026  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:36.561033  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:36.561036  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:36.561039  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:36 GMT
	I0916 10:54:36.561042  150386 round_trippers.go:580]     Audit-Id: 8f46dc76-ad7f-4da6-9680-019ddaa49119
	I0916 10:54:36.561046  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:36.561049  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:36.561236  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"
f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
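
The repeated GETs that follow are a readiness poll: fetch /api/v1/nodes/<name> roughly every 500ms and check the Ready condition in status.conditions. A minimal sketch of that loop with net/http; authentication is elided (the real run uses the client certificates listed above), and InsecureSkipVerify is used only to keep the sketch short.

    // Readiness poll sketch: fetch the Node object and inspect the Ready
    // condition. Credentials and proper TLS setup are deliberately elided.
    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func nodeReady(client *http.Client, api, name string) (bool, error) {
        resp, err := client.Get(api + "/api/v1/nodes/" + name)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var n node
        if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
            return false, err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        // InsecureSkipVerify keeps the sketch short; never do this in real code.
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
        for {
            ready, err := nodeReady(client, "https://192.168.67.2:8443", "multinode-026168-m02")
            if err == nil && ready {
                fmt.Println("node Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
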
	I0916 10:54:37.058856  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:37.058880  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:37.058888  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:37.058893  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:37.060924  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:37.060944  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:37.060949  150386 round_trippers.go:580]     Audit-Id: 20752ce4-2144-4c1d-ad86-b2a8ceeaebe9
	I0916 10:54:37.060953  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:37.060956  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:37.060959  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:37.060961  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:37.060967  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:37 GMT
	I0916 10:54:37.061139  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"
f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:37.558759  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:37.558784  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:37.558791  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:37.558796  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:37.560947  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:37.560987  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:37.560997  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:37.561007  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:37.561012  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:37 GMT
	I0916 10:54:37.561018  150386 round_trippers.go:580]     Audit-Id: fc35c5c3-40bc-4e37-8dac-a02b2a41e9c0
	I0916 10:54:37.561022  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:37.561026  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:37.561125  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"
f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:38.059797  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:38.059821  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:38.059837  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:38.059846  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:38.063841  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:38.063870  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:38.063879  150386 round_trippers.go:580]     Audit-Id: a1919aa4-dc0b-4bf0-ab2b-38f72f0b0aa1
	I0916 10:54:38.063885  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:38.063891  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:38.063895  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:38.063900  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:38.063903  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:38 GMT
	I0916 10:54:38.064015  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"
f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:38.559776  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:38.559801  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:38.559809  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:38.559814  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:38.562182  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:38.562203  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:38.562211  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:38.562217  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:38.562221  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:38.562226  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:38.562229  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:38 GMT
	I0916 10:54:38.562233  150386 round_trippers.go:580]     Audit-Id: 094963d1-4f41-4386-bddc-a015db6d34d7
	I0916 10:54:38.562393  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"459","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"
f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f: [truncated 5537 chars]
	I0916 10:54:38.562733  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:39.058978  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:39.058997  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:39.059005  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:39.059009  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:39.061133  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:39.061151  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:39.061158  150386 round_trippers.go:580]     Audit-Id: f2fc3ed4-f50e-4e63-9c7a-d99444e39cd3
	I0916 10:54:39.061161  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:39.061165  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:39.061170  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:39.061174  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:39.061177  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:39 GMT
	I0916 10:54:39.061360  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:39.559785  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:39.559821  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:39.559833  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:39.559841  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:39.561493  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:39.561519  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:39.561529  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:39.561534  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:39 GMT
	I0916 10:54:39.561538  150386 round_trippers.go:580]     Audit-Id: a558e1f5-c0a8-49d9-979e-80f53674df2f
	I0916 10:54:39.561543  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:39.561548  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:39.561551  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:39.561740  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:40.059712  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:40.059734  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:40.059742  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:40.059746  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:40.062048  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:40.062074  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:40.062084  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:40.062089  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:40.062093  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:40 GMT
	I0916 10:54:40.062096  150386 round_trippers.go:580]     Audit-Id: 80503784-f557-4228-9717-6994d3d05b4f
	I0916 10:54:40.062100  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:40.062104  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:40.062271  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:40.558882  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:40.558914  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:40.558926  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:40.558930  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:40.561019  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:40.561040  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:40.561048  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:40.561054  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:40.561058  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:40.561062  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:40.561065  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:40 GMT
	I0916 10:54:40.561069  150386 round_trippers.go:580]     Audit-Id: f7d334f6-00c5-40a9-a183-175e0af44ddc
	I0916 10:54:40.561188  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:41.058836  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:41.058863  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:41.058871  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:41.058877  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:41.060988  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:41.061006  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:41.061012  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:41 GMT
	I0916 10:54:41.061016  150386 round_trippers.go:580]     Audit-Id: f15bd309-4105-4d0a-9630-4f241689b355
	I0916 10:54:41.061019  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:41.061023  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:41.061027  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:41.061032  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:41.061199  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:41.061532  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:41.559228  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:41.559252  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:41.559260  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:41.559266  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:41.561628  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:41.561651  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:41.561657  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:41 GMT
	I0916 10:54:41.561660  150386 round_trippers.go:580]     Audit-Id: 5ce89702-487f-452a-a40c-b44caae40ad6
	I0916 10:54:41.561663  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:41.561667  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:41.561670  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:41.561674  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:41.561863  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:42.059628  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:42.059655  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:42.059664  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:42.059668  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:42.061852  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:42.061876  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:42.061885  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:42.061889  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:42.061892  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:42 GMT
	I0916 10:54:42.061898  150386 round_trippers.go:580]     Audit-Id: f7735a59-280c-4558-a3ce-24e8a69394c7
	I0916 10:54:42.061903  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:42.061909  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:42.062063  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:42.558895  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:42.558924  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:42.558932  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:42.558937  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:42.561242  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:42.561264  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:42.561273  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:42.561279  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:42.561283  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:42.561287  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:42.561291  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:42 GMT
	I0916 10:54:42.561296  150386 round_trippers.go:580]     Audit-Id: 13e11d2a-bb0b-41f9-b7fa-7c53c43f221c
	I0916 10:54:42.561489  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:43.059093  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:43.059117  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:43.059124  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:43.059129  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:43.061547  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:43.061567  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:43.061574  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:43 GMT
	I0916 10:54:43.061581  150386 round_trippers.go:580]     Audit-Id: c5042587-91e3-427b-8603-77d3661b2276
	I0916 10:54:43.061586  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:43.061590  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:43.061594  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:43.061600  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:43.061771  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:43.062094  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:43.559575  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:43.559605  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:43.559614  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:43.559620  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:43.562211  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:43.562237  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:43.562247  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:43 GMT
	I0916 10:54:43.562252  150386 round_trippers.go:580]     Audit-Id: bfd8433c-34d9-47de-9883-d42ad0978123
	I0916 10:54:43.562258  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:43.562264  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:43.562269  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:43.562273  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:43.562442  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:44.058974  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:44.059000  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:44.059013  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:44.059017  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:44.061362  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:44.061383  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:44.061391  150386 round_trippers.go:580]     Audit-Id: 8f5e8125-cd52-4c90-922e-8fcde03efd6e
	I0916 10:54:44.061397  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:44.061403  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:44.061407  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:44.061410  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:44.061414  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:44 GMT
	I0916 10:54:44.061577  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:44.559156  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:44.559187  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:44.559197  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:44.559202  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:44.561479  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:44.561502  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:44.561508  150386 round_trippers.go:580]     Audit-Id: b0fc223c-7adf-45d1-8010-3cb4321a899d
	I0916 10:54:44.561512  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:44.561516  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:44.561519  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:44.561522  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:44.561524  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:44 GMT
	I0916 10:54:44.561760  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:45.059527  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:45.059554  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:45.059562  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:45.059568  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:45.062061  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:45.062082  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:45.062088  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:45.062092  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:45.062096  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:45.062098  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:45 GMT
	I0916 10:54:45.062101  150386 round_trippers.go:580]     Audit-Id: 68b32f38-eca8-4ef1-9a22-804d03651568
	I0916 10:54:45.062104  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:45.062286  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:45.062613  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:45.558926  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:45.558948  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:45.558956  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:45.558959  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:45.561097  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:45.561119  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:45.561127  150386 round_trippers.go:580]     Audit-Id: a0c59747-e495-48a3-b73c-18eb719b469f
	I0916 10:54:45.561133  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:45.561137  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:45.561142  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:45.561147  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:45.561151  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:45 GMT
	I0916 10:54:45.561310  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:46.058890  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:46.058920  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:46.058931  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:46.058937  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:46.061211  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:46.061234  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:46.061244  150386 round_trippers.go:580]     Audit-Id: e4c4262b-3cb8-4ac1-a365-135991a926cb
	I0916 10:54:46.061251  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:46.061257  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:46.061263  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:46.061269  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:46.061274  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:46 GMT
	I0916 10:54:46.061410  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"472","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5646 chars]
	I0916 10:54:46.559277  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:46.559301  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:46.559311  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:46.559318  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:46.562075  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:46.562096  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:46.562104  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:46.562110  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:46.562116  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:46.562122  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:46.562128  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:46 GMT
	I0916 10:54:46.562133  150386 round_trippers.go:580]     Audit-Id: 7790a5c4-e4ad-4dcb-a279-e32196f4ce24
	I0916 10:54:46.562401  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:47.059356  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:47.059389  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:47.059400  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:47.059405  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:47.061596  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:47.061621  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:47.061630  150386 round_trippers.go:580]     Audit-Id: bd72d213-b993-4f2e-a72e-bd57a0e93532
	I0916 10:54:47.061638  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:47.061643  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:47.061648  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:47.061653  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:47.061657  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:47 GMT
	I0916 10:54:47.061868  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:47.559518  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:47.559544  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:47.559553  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:47.559560  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:47.561928  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:47.561948  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:47.561955  150386 round_trippers.go:580]     Audit-Id: e2c3457d-da66-4879-a51c-b83355d8be98
	I0916 10:54:47.561958  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:47.561961  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:47.561965  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:47.561968  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:47.561970  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:47 GMT
	I0916 10:54:47.562158  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:47.562483  150386 node_ready.go:53] node "multinode-026168-m02" has status "Ready":"False"
	I0916 10:54:48.058791  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:48.058813  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:48.058821  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:48.058833  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:48.060746  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:48.060768  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:48.060776  150386 round_trippers.go:580]     Audit-Id: 4481350a-d61e-4437-a9b2-62502ba2f9d9
	I0916 10:54:48.060783  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:48.060788  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:48.060794  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:48.060798  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:48.060802  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:48 GMT
	I0916 10:54:48.060976  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:48.559733  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:48.559765  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:48.559775  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:48.559781  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:48.562175  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:48.562200  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:48.562207  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:48.562211  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:48.562213  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:48.562216  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:48.562219  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:48 GMT
	I0916 10:54:48.562221  150386 round_trippers.go:580]     Audit-Id: b00640ac-a1e7-4281-bb8e-112d8e2c8f12
	I0916 10:54:48.562391  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"485","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6038 chars]
	I0916 10:54:49.059002  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:49.059034  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.059043  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.059047  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.061464  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.061484  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.061493  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.061497  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.061500  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.061503  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.061506  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.061508  150386 round_trippers.go:580]     Audit-Id: 1e9847af-1950-4064-bb0f-c79ac1adf35f
	I0916 10:54:49.061700  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"489","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5855 chars]
	I0916 10:54:49.062013  150386 node_ready.go:49] node "multinode-026168-m02" has status "Ready":"True"
	I0916 10:54:49.062028  150386 node_ready.go:38] duration metric: took 12.503428835s for node "multinode-026168-m02" to be "Ready" ...
	I0916 10:54:49.062036  150386 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:54:49.062099  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:54:49.062111  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.062118  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.062123  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.064914  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.064943  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.064953  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.064960  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.064967  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.064974  150386 round_trippers.go:580]     Audit-Id: b951a70a-3787-4e5f-a6d2-75a1ff6b3c9d
	I0916 10:54:49.064980  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.064985  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.065555  150386 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 74117 chars]
	I0916 10:54:49.067794  150386 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.067870  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:54:49.067878  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.067885  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.067889  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.069615  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.069630  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.069637  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.069642  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.069645  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.069647  150386 round_trippers.go:580]     Audit-Id: 823ce3ed-d62f-415c-a2f0-7f031d7725c3
	I0916 10:54:49.069650  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.069653  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.069893  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0916 10:54:49.070315  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.070328  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.070335  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.070338  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.072071  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.072090  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.072098  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.072103  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.072110  150386 round_trippers.go:580]     Audit-Id: 0dd16ca9-9963-4fa5-87e8-efa78837dd4c
	I0916 10:54:49.072115  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.072119  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.072130  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.072221  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.072556  150386 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.072575  150386 pod_ready.go:82] duration metric: took 4.758808ms for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.072586  150386 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.072638  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:54:49.072645  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.072652  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.072655  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.074334  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.074349  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.074358  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.074363  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.074370  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.074376  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.074380  150386 round_trippers.go:580]     Audit-Id: b4481839-1437-4079-a7a3-671987eb810d
	I0916 10:54:49.074384  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.074527  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"382","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6435 chars]
	I0916 10:54:49.074921  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.074936  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.074942  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.074947  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.076515  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.076533  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.076545  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.076551  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.076600  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.076615  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.076620  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.076626  150386 round_trippers.go:580]     Audit-Id: b17f7e59-8743-4a30-ac57-f79a52e5f01e
	I0916 10:54:49.076745  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.077123  150386 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.077141  150386 pod_ready.go:82] duration metric: took 4.549084ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.077158  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.077235  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:54:49.077243  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.077252  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.077261  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.078953  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.078970  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.078976  150386 round_trippers.go:580]     Audit-Id: 155ff9fe-5eaf-4de7-82af-3c85987dcef5
	I0916 10:54:49.078980  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.078984  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.078988  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.078990  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.078993  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.079177  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"384","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8513 chars]
	I0916 10:54:49.079576  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.079610  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.079617  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.079621  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.081136  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.081154  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.081163  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.081169  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.081175  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.081182  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.081186  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.081189  150386 round_trippers.go:580]     Audit-Id: 1602fedf-bf13-4ecd-9692-278af862ff3f
	I0916 10:54:49.081302  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.081726  150386 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.081748  150386 pod_ready.go:82] duration metric: took 4.578295ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.081760  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.081824  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:54:49.081835  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.081845  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.081852  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.083444  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.083458  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.083466  150386 round_trippers.go:580]     Audit-Id: ebadba2b-bdfe-4fc2-a1fd-95dfef4b8dca
	I0916 10:54:49.083472  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.083476  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.083485  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.083492  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.083496  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.083638  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"380","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8088 chars]
	I0916 10:54:49.084042  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.084054  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.084061  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.084065  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.085772  150386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:54:49.085793  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.085802  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.085808  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.085812  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.085818  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.085826  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.085832  150386 round_trippers.go:580]     Audit-Id: f3fd8af6-6b47-4fd2-8488-d5ae87a3a9ef
	I0916 10:54:49.085967  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.086247  150386 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.086261  150386 pod_ready.go:82] duration metric: took 4.494011ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.086273  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.259738  150386 request.go:632] Waited for 173.387504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:54:49.259815  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:54:49.259823  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.259833  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.259846  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.263334  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:49.263361  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.263372  150386 round_trippers.go:580]     Audit-Id: bf9492a2-83f6-43bf-b39c-837cb3fc7da5
	I0916 10:54:49.263376  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.263382  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.263387  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.263392  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.263397  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.263524  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"348","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:54:49.459341  150386 request.go:632] Waited for 195.349446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.459428  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:49.459442  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.459450  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.459455  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.461786  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.461806  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.461812  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.461815  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.461818  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.461821  150386 round_trippers.go:580]     Audit-Id: 1df62ff4-e821-41ac-888b-d13b62fe90cb
	I0916 10:54:49.461825  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.461829  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.461979  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:49.462297  150386 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.462310  150386 pod_ready.go:82] duration metric: took 376.031746ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.462321  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.659605  150386 request.go:632] Waited for 197.202161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:54:49.659663  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:54:49.659670  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.659680  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.659692  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.662224  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.662250  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.662260  150386 round_trippers.go:580]     Audit-Id: 4275301e-b3c5-412d-80f5-fdfb3775bb15
	I0916 10:54:49.662266  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.662271  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.662276  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.662280  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.662285  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.662438  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qds2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac30bd54-b932-4f52-a53c-4edbc5eefc7c","resourceVersion":"475","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:54:49.859219  150386 request.go:632] Waited for 196.277089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:49.859292  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:54:49.859299  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:49.859309  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:49.859314  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:49.861645  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:49.861668  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:49.861676  150386 round_trippers.go:580]     Audit-Id: 26bb0cab-ad8d-466c-b3f9-d15f0036fc7b
	I0916 10:54:49.861682  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:49.861688  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:49.861694  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:49.861699  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:49.861703  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:49 GMT
	I0916 10:54:49.861798  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"489","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5855 chars]
	I0916 10:54:49.862131  150386 pod_ready.go:93] pod "kube-proxy-qds2d" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:49.862148  150386 pod_ready.go:82] duration metric: took 399.820491ms for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:49.862157  150386 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:50.059477  150386 request.go:632] Waited for 197.252131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:54:50.059552  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:54:50.059560  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:50.059571  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:50.059580  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:50.062187  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:50.062227  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:50.062238  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:50.062245  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:50.062248  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:50.062251  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:50.062254  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:50 GMT
	I0916 10:54:50.062259  150386 round_trippers.go:580]     Audit-Id: 2416b370-bac5-4800-9b0b-8766b1fc1ef1
	I0916 10:54:50.062374  150386 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"377","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4970 chars]
	I0916 10:54:50.259053  150386 request.go:632] Waited for 196.284331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:50.259125  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:54:50.259131  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:50.259142  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:50.259148  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:50.261248  150386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:54:50.261270  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:50.261280  150386 round_trippers.go:580]     Audit-Id: defb4f0c-b73f-4075-9dc0-d352e539d7c6
	I0916 10:54:50.261289  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:50.261292  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:50.261296  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:50.261300  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:50.261306  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:50 GMT
	I0916 10:54:50.261428  150386 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0916 10:54:50.261831  150386 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:54:50.261855  150386 pod_ready.go:82] duration metric: took 399.68992ms for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:54:50.261870  150386 pod_ready.go:39] duration metric: took 1.199818746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:54:50.261891  150386 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:54:50.261955  150386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:54:50.272993  150386 system_svc.go:56] duration metric: took 11.095119ms WaitForService to wait for kubelet
	I0916 10:54:50.273031  150386 kubeadm.go:582] duration metric: took 13.809973833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:50.273053  150386 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:54:50.459512  150386 request.go:632] Waited for 186.380571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:54:50.459582  150386 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:54:50.459593  150386 round_trippers.go:469] Request Headers:
	I0916 10:54:50.459604  150386 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:54:50.459610  150386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:54:50.463070  150386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:54:50.463096  150386 round_trippers.go:577] Response Headers:
	I0916 10:54:50.463106  150386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:54:50.463112  150386 round_trippers.go:580]     Content-Type: application/json
	I0916 10:54:50.463116  150386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:54:50.463121  150386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:54:50.463124  150386 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:54:50 GMT
	I0916 10:54:50.463127  150386 round_trippers.go:580]     Audit-Id: 072c69b0-e276-417b-bb13-fd249f20d557
	I0916 10:54:50.463389  150386 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"490"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"396","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12847 chars]
	I0916 10:54:50.463881  150386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:54:50.463901  150386 node_conditions.go:123] node cpu capacity is 8
	I0916 10:54:50.463911  150386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:54:50.463915  150386 node_conditions.go:123] node cpu capacity is 8
	I0916 10:54:50.463919  150386 node_conditions.go:105] duration metric: took 190.861345ms to run NodePressure ...
	I0916 10:54:50.463931  150386 start.go:241] waiting for startup goroutines ...
	I0916 10:54:50.463953  150386 start.go:255] writing updated cluster config ...
	I0916 10:54:50.464253  150386 ssh_runner.go:195] Run: rm -f paused
	I0916 10:54:50.471683  150386 out.go:177] * Done! kubectl is now configured to use "multinode-026168" cluster and "default" namespace by default
	E0916 10:54:50.472866  150386 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
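	
	==> sketch: checking the kubectl binary <==
	The "exec format error" above is the kernel refusing to run /usr/local/bin/kubectl, which usually means the file is not a valid executable for this host: a wrong-architecture build, or a truncated or HTML-error-page download. A minimal check, assuming nothing beyond the path reported in the log line:
	
	file /usr/local/bin/kubectl                  # a healthy binary reports: ELF 64-bit LSB executable, x86-64
	uname -m                                     # host architecture; should match the binary (x86_64 here)
	head -c 64 /usr/local/bin/kubectl | cat -v   # anything other than ELF magic (^?ELF) confirms a bad download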
	
	
	==> CRI-O <==
	Sep 16 10:54:21 multinode-026168 crio[1036]: time="2024-09-16 10:54:21.408315288Z" level=info msg="Created container dd488a7986689a3b741c4640a0507a0bb14054b96a7c905ed64792e2e8aabd77: kube-system/coredns-7c65d6cfc9-s82cx/coredns" id=6d7639d5-6284-471f-af47-76a746536a1c name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:54:21 multinode-026168 crio[1036]: time="2024-09-16 10:54:21.408836594Z" level=info msg="Starting container: dd488a7986689a3b741c4640a0507a0bb14054b96a7c905ed64792e2e8aabd77" id=44368356-a791-46fb-a41d-2d3aba74d249 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:54:21 multinode-026168 crio[1036]: time="2024-09-16 10:54:21.417157923Z" level=info msg="Started container" PID=2271 containerID=dd488a7986689a3b741c4640a0507a0bb14054b96a7c905ed64792e2e8aabd77 description=kube-system/coredns-7c65d6cfc9-s82cx/coredns id=44368356-a791-46fb-a41d-2d3aba74d249 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c78b727e5ddd75d14e74c37444e462ebaceacb4ec9574635898675863c49c63c
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.439874619Z" level=info msg="Running pod sandbox: default/busybox-7dff88458-qt9rx/POD" id=49a45ee6-4723-4eb8-a4a5-5408040f0b07 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.439957676Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.453709535Z" level=info msg="Got pod network &{Name:busybox-7dff88458-qt9rx Namespace:default ID:13510cd1f15810cc6e086d3a03dbd3cbfa9654ab55384185948baf7590fc58aa UID:d57d4baf-c7d6-4ab6-aa3b-fda87c54a2b3 NetNS:/var/run/netns/8450bee4-4a0f-46f6-aaa4-467251c3a5fa Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.453739955Z" level=info msg="Adding pod default_busybox-7dff88458-qt9rx to CNI network \"kindnet\" (type=ptp)"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.463019830Z" level=info msg="Got pod network &{Name:busybox-7dff88458-qt9rx Namespace:default ID:13510cd1f15810cc6e086d3a03dbd3cbfa9654ab55384185948baf7590fc58aa UID:d57d4baf-c7d6-4ab6-aa3b-fda87c54a2b3 NetNS:/var/run/netns/8450bee4-4a0f-46f6-aaa4-467251c3a5fa Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.463183285Z" level=info msg="Checking pod default_busybox-7dff88458-qt9rx for CNI network kindnet (type=ptp)"
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.466461833Z" level=info msg="Ran pod sandbox 13510cd1f15810cc6e086d3a03dbd3cbfa9654ab55384185948baf7590fc58aa with infra container: default/busybox-7dff88458-qt9rx/POD" id=49a45ee6-4723-4eb8-a4a5-5408040f0b07 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.467790445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=dd295c8b-dd40-479c-8547-e47a36230620 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.468038208Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=dd295c8b-dd40-479c-8547-e47a36230620 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.468952675Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7a209516-0b9d-4b6a-8357-bce55a486fff name=/runtime.v1.ImageService/PullImage
	Sep 16 10:54:51 multinode-026168 crio[1036]: time="2024-09-16 10:54:51.481368131Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:54:52 multinode-026168 crio[1036]: time="2024-09-16 10:54:52.332298901Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.205037193Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=7a209516-0b9d-4b6a-8357-bce55a486fff name=/runtime.v1.ImageService/PullImage
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.205834803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c834c4c1-abc5-47c2-b7a4-2fc9f75dcf48 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.206435687Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c834c4c1-abc5-47c2-b7a4-2fc9f75dcf48 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.207079460Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=cb50c416-41d5-4f4b-a655-678f57d80e69 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.207665053Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cb50c416-41d5-4f4b-a655-678f57d80e69 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.208315074Z" level=info msg="Creating container: default/busybox-7dff88458-qt9rx/busybox" id=dc98a5fe-66e1-4e11-9cce-ca804e1c1a75 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.208404944Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.251872423Z" level=info msg="Created container 83607811f9eb9ab48e0ee8d2c2a26d4614a56aa450821115b45ddf3d89706b72: default/busybox-7dff88458-qt9rx/busybox" id=dc98a5fe-66e1-4e11-9cce-ca804e1c1a75 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.252570812Z" level=info msg="Starting container: 83607811f9eb9ab48e0ee8d2c2a26d4614a56aa450821115b45ddf3d89706b72" id=a4a11726-fef5-4a87-b15b-84033f4e1b59 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:54:54 multinode-026168 crio[1036]: time="2024-09-16 10:54:54.258426207Z" level=info msg="Started container" PID=2437 containerID=83607811f9eb9ab48e0ee8d2c2a26d4614a56aa450821115b45ddf3d89706b72 description=default/busybox-7dff88458-qt9rx/busybox id=a4a11726-fef5-4a87-b15b-84033f4e1b59 name=/runtime.v1.RuntimeService/StartContainer sandboxID=13510cd1f15810cc6e086d3a03dbd3cbfa9654ab55384185948baf7590fc58aa
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	83607811f9eb9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   52 seconds ago       Running             busybox                   0                   13510cd1f1581       busybox-7dff88458-qt9rx
	dd488a7986689       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   0                   c78b727e5ddd7       coredns-7c65d6cfc9-s82cx
	8913755836cdf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   d3faa1e799926       storage-provisioner
	94f816a173a35       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                0                   9ceb1b5d5a981       kube-proxy-6p6vt
	031615b88b45c       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               0                   bf2205a75f62c       kindnet-zv2p5
	8a997c9857a33       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Running             kube-controller-manager   0                   f3e447b209d6f       kube-controller-manager-multinode-026168
	fd0447db4a560       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            0                   4a4735b8eefdf       kube-apiserver-multinode-026168
	974f8e8c18191       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            0                   28d0d26f8e186       kube-scheduler-multinode-026168
	62d269db79164       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      0                   123e0f4195c8e       etcd-multinode-026168
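	
	
	==> sketch: reproducing the container status table <==
	The table above is the CRI view of the node's containers. A minimal way to reproduce it on the node, assuming crictl is pointed at the CRI-O socket named in the node annotations:
	
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --digests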
	
	
	==> coredns [dd488a7986689a3b741c4640a0507a0bb14054b96a7c905ed64792e2e8aabd77] <==
	[INFO] 10.244.0.3:51187 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094239s
	[INFO] 10.244.1.2:45181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146935s
	[INFO] 10.244.1.2:59156 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002005499s
	[INFO] 10.244.1.2:39395 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101088s
	[INFO] 10.244.1.2:42528 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093197s
	[INFO] 10.244.1.2:33187 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001513712s
	[INFO] 10.244.1.2:33143 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127202s
	[INFO] 10.244.1.2:47467 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006244s
	[INFO] 10.244.1.2:56932 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085407s
	[INFO] 10.244.0.3:51617 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140403s
	[INFO] 10.244.0.3:48759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081554s
	[INFO] 10.244.0.3:37584 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090321s
	[INFO] 10.244.0.3:59186 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065092s
	[INFO] 10.244.1.2:36167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135451s
	[INFO] 10.244.1.2:59973 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099974s
	[INFO] 10.244.1.2:58529 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060588s
	[INFO] 10.244.1.2:53665 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006045s
	[INFO] 10.244.0.3:45471 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119597s
	[INFO] 10.244.0.3:51073 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168151s
	[INFO] 10.244.0.3:37620 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118832s
	[INFO] 10.244.0.3:38968 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098574s
	[INFO] 10.244.1.2:41991 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140828s
	[INFO] 10.244.1.2:56798 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116768s
	[INFO] 10.244.1.2:55463 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080791s
	[INFO] 10.244.1.2:37704 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058286s
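	
	
	==> sketch: re-running the coredns lookups <==
	The queries above are in-cluster resolutions of kubernetes.default and host.minikube.internal. With a working kubectl they can be re-run from the busybox pod listed in the container status section, assuming it is still running:
	
	kubectl exec busybox-7dff88458-qt9rx -- nslookup kubernetes.default.svc.cluster.local
	kubectl exec busybox-7dff88458-qt9rx -- nslookup host.minikube.internal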
	
	
	==> describe nodes <==
	Name:               multinode-026168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_53_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:53:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:55:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:55:05 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:55:05 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:55:05 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:55:05 +0000   Mon, 16 Sep 2024 10:54:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-026168
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 abcf2b5c41114d64bb158d3abc1bc1e7
	  System UUID:                8db2fd04-b5e4-4ec7-8d8e-d94280ac94a3
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qt9rx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 coredns-7c65d6cfc9-s82cx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m7s
	  kube-system                 etcd-multinode-026168                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m12s
	  kube-system                 kindnet-zv2p5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m7s
	  kube-system                 kube-apiserver-multinode-026168             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-multinode-026168    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-6p6vt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-multinode-026168             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m6s   kube-proxy       
	  Normal   Starting                 2m12s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m12s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m12s  kubelet          Node multinode-026168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m12s  kubelet          Node multinode-026168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m12s  kubelet          Node multinode-026168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m8s   node-controller  Node multinode-026168 event: Registered Node multinode-026168 in Controller
	  Normal   NodeReady                86s    kubelet          Node multinode-026168 status is now: NodeReady
	
	
	Name:               multinode-026168-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_54_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:54:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:55:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:55:06 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:55:06 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:55:06 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:55:06 +0000   Mon, 16 Sep 2024 10:54:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-026168-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 7732a396f8244d84817f5f8cac803842
	  System UUID:                50f4fbf1-c6a3-4700-a79b-bb8841197877
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z8csk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kindnet-mckv5              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      70s
	  kube-system                 kube-proxy-qds2d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  NodeHasSufficientMemory  70s (x2 over 71s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x2 over 71s)  kubelet          Node multinode-026168-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x2 over 71s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           68s                node-controller  Node multinode-026168-m02 event: Registered Node multinode-026168-m02 in Controller
	  Normal  NodeReady                58s                kubelet          Node multinode-026168-m02 status is now: NodeReady
	
	
	Name:               multinode-026168-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_55_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:55:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:55:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:55:20 +0000   Mon, 16 Sep 2024 10:55:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:55:20 +0000   Mon, 16 Sep 2024 10:55:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:55:20 +0000   Mon, 16 Sep 2024 10:55:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:55:20 +0000   Mon, 16 Sep 2024 10:55:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.4
	  Hostname:    multinode-026168-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdc0e75be17e4d3f9f9899b448a95dc1
	  System UUID:                df965121-0c57-4bf4-8c99-f55a28f729db
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2jtzj       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-proxy-g86bs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)  kubelet          Node multinode-026168-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)  kubelet          Node multinode-026168-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)  kubelet          Node multinode-026168-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node multinode-026168-m03 event: Registered Node multinode-026168-m03 in Controller
	  Normal  NodeReady                26s                kubelet          Node multinode-026168-m03 status is now: NodeReady
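	
	
	==> sketch: querying the node state directly <==
	The three node descriptions above are rendered from the API server's Node objects; with a working kubectl the same state can be pulled directly:
	
	kubectl get nodes -o wide
	kubectl describe node multinode-026168-m03
	kubectl get node multinode-026168 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'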
	
	
	==> dmesg <==
	[  +0.095980] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004016] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +1.915832] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +4.031681] net_ratelimit: 5 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000002] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.255941] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000001] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004022] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +7.931402] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000006] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000002] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.004224] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-1162a04f8fb0
	[  +0.000005] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
	[  +0.251741] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1162a04f8fb0
	[  +0.000008] ll header: 00000000: 02 42 5c 9f 3b 1f 02 42 c0 a8 31 02 08 00
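	
	
	==> sketch: the martian-source messages <==
	The repeated "martian source" lines mean the kernel received packets whose source address is unexpected on that interface, which is common on Docker bridges carrying cluster traffic and is log noise rather than a failure. Logging is controlled per interface by the log_martians sysctl:
	
	sysctl net.ipv4.conf.all.log_martians
	sudo sysctl -w net.ipv4.conf.all.log_martians=0   # silences the logging only; forwarding is unaffected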
	
	
	==> etcd [62d269db791644dfcf7b38f0bcb3db1a486dd899cb5b8b1a7653839af3df554b] <==
	{"level":"info","ts":"2024-09-16T10:53:29.693859Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:53:29.693990Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:53:29.694028Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:53:29.634144Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:53:29.694641Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:53:29.821369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:53:29.821424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:53:29.821469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-09-16T10:53:29.821484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:53:29.821508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:53:29.821523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:53:29.821531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:53:29.822566Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:53:29.823322Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:53:29.823322Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-026168 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:53:29.823391Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:53:29.823637Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:53:29.823662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:53:29.823745Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:53:29.823835Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:53:29.823877Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:53:29.824554Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:53:29.824646Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:53:29.825426Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:53:29.825454Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> kernel <==
	 10:55:46 up 38 min,  0 users,  load average: 0.93, 1.32, 1.01
	Linux multinode-026168 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [031615b88b45c13559d669c660a76b43765997b0548fcfa19fa2eea1c71beffc] <==
	I0916 10:55:10.594561       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:55:10.594641       1 main.go:299] handling current node
	I0916 10:55:10.594660       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:55:10.594668       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:10.594825       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:55:10.594846       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:55:10.594904       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.67.4 Flags: [] Table: 0} 
	I0916 10:55:20.594146       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:55:20.594181       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:20.594304       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:55:20.594312       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:55:20.594374       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:55:20.594386       1 main.go:299] handling current node
	I0916 10:55:30.597504       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:55:30.597547       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:55:30.597674       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:55:30.597694       1 main.go:299] handling current node
	I0916 10:55:30.597707       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:55:30.597730       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:40.594541       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:55:40.594606       1 main.go:299] handling current node
	I0916 10:55:40.594621       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:55:40.594626       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:40.594744       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:55:40.594752       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
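	
	
	==> sketch: the routes kindnet programs <==
	Each "Adding route" line above installs one host route per remote node, mapping that node's pod CIDR to its node IP. On multinode-026168 the result should be visible in the routing table:
	
	ip route show | grep 10.244   # expect 10.244.1.0/24 via 192.168.67.3 and 10.244.2.0/24 via 192.168.67.4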
	
	
	==> kube-apiserver [fd0447db4a560a60ebcfda53d853a3e402c5897ca07bff9ef1397e4a880e4a17] <==
	I0916 10:53:32.762598       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:53:32.762617       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:53:33.235614       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:53:33.282392       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:53:33.363208       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:53:33.369405       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0916 10:53:33.370718       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:53:33.375137       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:53:33.817118       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:53:34.460123       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:53:34.469876       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:53:34.477929       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:53:38.819211       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:53:39.469789       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 10:54:55.336047       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45634: use of closed network connection
	E0916 10:54:55.492122       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45658: use of closed network connection
	E0916 10:54:55.656364       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45670: use of closed network connection
	E0916 10:54:55.808526       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45686: use of closed network connection
	E0916 10:54:55.959936       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45708: use of closed network connection
	E0916 10:54:56.106197       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45722: use of closed network connection
	E0916 10:54:56.362300       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45748: use of closed network connection
	E0916 10:54:56.509506       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45772: use of closed network connection
	E0916 10:54:56.653981       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45792: use of closed network connection
	E0916 10:54:56.801852       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:45818: use of closed network connection
	
	
	==> kube-controller-manager [8a997c9857a33b254e0f727760c626327173dac57074563809c3087a43fee71e] <==
	I0916 10:54:51.193948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.348493ms"
	I0916 10:54:51.206532       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.529479ms"
	I0916 10:54:51.206693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.307µs"
	I0916 10:54:53.585018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m02"
	I0916 10:54:54.577751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.687799ms"
	I0916 10:54:54.577867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.823µs"
	I0916 10:54:54.935158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.559253ms"
	I0916 10:54:54.935240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.35µs"
	I0916 10:55:05.944960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168"
	I0916 10:55:06.799587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m02"
	I0916 10:55:06.906228       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-026168-m03\" does not exist"
	I0916 10:55:06.906316       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-026168-m02"
	I0916 10:55:06.911934       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-026168-m03" podCIDRs=["10.244.2.0/24"]
	I0916 10:55:06.911967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:06.912035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:06.918648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:06.950803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:07.172627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:08.587705       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-026168-m03"
	I0916 10:55:08.626743       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:16.977212       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:20.053468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-026168-m02"
	I0916 10:55:20.053524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:20.062234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:55:23.600713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	
	
	==> kube-proxy [94f816a173a351d394edbe3db69798d9d3bc38225a8c8fda39ab554294fee17a] <==
	I0916 10:53:39.937150       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:53:40.100667       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:53:40.100747       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:53:40.214258       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:53:40.214346       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:53:40.216756       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:53:40.217451       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:53:40.217487       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:53:40.218742       1 config.go:199] "Starting service config controller"
	I0916 10:53:40.218780       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:53:40.218818       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:53:40.218829       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:53:40.219300       1 config.go:328] "Starting node config controller"
	I0916 10:53:40.219318       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:53:40.319906       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:53:40.319921       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:53:40.319994       1 shared_informer.go:320] Caches are synced for node config
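	The three "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup handshake: each config controller starts its informers, then blocks until the initial LIST is reflected in the local cache before acting on events. A minimal sketch of that pattern, assuming client-go is available (a generic illustration, not kube-proxy's actual source):

	package informersketch

	import (
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
	)

	// StartServiceWatcher mirrors the handshake logged above: start the shared
	// informers, then block until the first full list of Services is cached.
	func StartServiceWatcher(client kubernetes.Interface, stopCh <-chan struct{}) {
		factory := informers.NewSharedInformerFactory(client, 0)
		svcInformer := factory.Core().V1().Services().Informer()
		factory.Start(stopCh)
		// Corresponds to "Waiting for caches to sync": returns false only if
		// stopCh closes before the cache is warm.
		if !cache.WaitForCacheSync(stopCh, svcInformer.HasSynced) {
			return
		}
		// Corresponds to "Caches are synced": handlers now see a complete view.
	}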
	
	
	==> kube-scheduler [974f8e8c181912c331a9a90b937ad165217c9646d4dd4d80b604897509dbf716] <==
	W0916 10:53:31.918636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:53:31.918654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:31.918641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 10:53:31.918729       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:53:31.918750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0916 10:53:31.918723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.771084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:53:32.771123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.817498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:53:32.817541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.858871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:53:32.858918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.860931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:53:32.860968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.908547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:53:32.908621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.912926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:53:32.912974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:32.967806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:53:32.967853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:53:33.036648       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:53:33.036691       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:53:33.056529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:53:33.056631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:53:36.015633       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.191236    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkn6c\" (UniqueName: \"kubernetes.io/projected/85130138-c50d-47a8-8bbe-de91bb9a0472-kube-api-access-tkn6c\") pod \"coredns-7c65d6cfc9-s82cx\" (UID: \"85130138-c50d-47a8-8bbe-de91bb9a0472\") " pod="kube-system/coredns-7c65d6cfc9-s82cx"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.191267    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvkl9\" (UniqueName: \"kubernetes.io/projected/ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7-kube-api-access-xvkl9\") pod \"storage-provisioner\" (UID: \"ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.191292    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85130138-c50d-47a8-8bbe-de91bb9a0472-config-volume\") pod \"coredns-7c65d6cfc9-s82cx\" (UID: \"85130138-c50d-47a8-8bbe-de91bb9a0472\") " pod="kube-system/coredns-7c65d6cfc9-s82cx"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.505008    1653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-s82cx" podStartSLOduration=42.504986406 podStartE2EDuration="42.504986406s" podCreationTimestamp="2024-09-16 10:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:54:21.50471358 +0000 UTC m=+47.256797721" watchObservedRunningTime="2024-09-16 10:54:21.504986406 +0000 UTC m=+47.257070546"
	Sep 16 10:54:21 multinode-026168 kubelet[1653]: I0916 10:54:21.513711    1653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.513685569 podStartE2EDuration="41.513685569s" podCreationTimestamp="2024-09-16 10:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:54:21.513315134 +0000 UTC m=+47.265399272" watchObservedRunningTime="2024-09-16 10:54:21.513685569 +0000 UTC m=+47.265769736"
	Sep 16 10:54:24 multinode-026168 kubelet[1653]: E0916 10:54:24.416435    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484064416225651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:24 multinode-026168 kubelet[1653]: E0916 10:54:24.416481    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484064416225651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:34 multinode-026168 kubelet[1653]: E0916 10:54:34.417526    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484074417280895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:34 multinode-026168 kubelet[1653]: E0916 10:54:34.417568    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484074417280895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:44 multinode-026168 kubelet[1653]: E0916 10:54:44.418580    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484084418371436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:44 multinode-026168 kubelet[1653]: E0916 10:54:44.418615    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484084418371436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:51 multinode-026168 kubelet[1653]: I0916 10:54:51.294084    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5cs\" (UniqueName: \"kubernetes.io/projected/d57d4baf-c7d6-4ab6-aa3b-fda87c54a2b3-kube-api-access-ts5cs\") pod \"busybox-7dff88458-qt9rx\" (UID: \"d57d4baf-c7d6-4ab6-aa3b-fda87c54a2b3\") " pod="default/busybox-7dff88458-qt9rx"
	Sep 16 10:54:54 multinode-026168 kubelet[1653]: E0916 10:54:54.420224    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484094420002220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:54 multinode-026168 kubelet[1653]: E0916 10:54:54.420272    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484094420002220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:54 multinode-026168 kubelet[1653]: I0916 10:54:54.571314    1653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-qt9rx" podStartSLOduration=0.832867296 podStartE2EDuration="3.571296442s" podCreationTimestamp="2024-09-16 10:54:51 +0000 UTC" firstStartedPulling="2024-09-16 10:54:51.46822216 +0000 UTC m=+77.220306291" lastFinishedPulling="2024-09-16 10:54:54.2066513 +0000 UTC m=+79.958735437" observedRunningTime="2024-09-16 10:54:54.571036616 +0000 UTC m=+80.323120755" watchObservedRunningTime="2024-09-16 10:54:54.571296442 +0000 UTC m=+80.323380581"
	Sep 16 10:55:04 multinode-026168 kubelet[1653]: E0916 10:55:04.421472    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484104421256152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:04 multinode-026168 kubelet[1653]: E0916 10:55:04.421518    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484104421256152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:14 multinode-026168 kubelet[1653]: E0916 10:55:14.423046    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484114422875682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:14 multinode-026168 kubelet[1653]: E0916 10:55:14.423086    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484114422875682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:24 multinode-026168 kubelet[1653]: E0916 10:55:24.424503    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484124424213342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:24 multinode-026168 kubelet[1653]: E0916 10:55:24.424586    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484124424213342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:34 multinode-026168 kubelet[1653]: E0916 10:55:34.425732    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484134425529565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:34 multinode-026168 kubelet[1653]: E0916 10:55:34.425785    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484134425529565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:44 multinode-026168 kubelet[1653]: E0916 10:55:44.426863    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484144426640024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:44 multinode-026168 kubelet[1653]: E0916 10:55:44.427525    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484144426640024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-026168 -n multinode-026168
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (508.336µs)
helpers_test.go:263: kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/StartAfterStop (11.33s)
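The root cause threading through this run is the repeated "fork/exec /usr/local/bin/kubectl: exec format error": the kernel returns ENOEXEC because the kubectl binary at that path is not executable on this host, which in practice almost always means a wrong-architecture build or a truncated download. A minimal Go sketch of a pre-flight check, assuming the binary should be a native ELF executable (illustrative only, not part of the test harness):

package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

// expectedMachine maps GOARCH to the ELF machine type we expect to find.
// Only the architectures relevant to this report are listed.
var expectedMachine = map[string]elf.Machine{
	"amd64": elf.EM_X86_64,
	"arm64": elf.EM_AARCH64,
}

func main() {
	path := "/usr/local/bin/kubectl" // path taken from the failures above
	f, err := elf.Open(path)
	if err != nil {
		// Not a parseable ELF at all: truncated download, HTML error page, etc.
		fmt.Fprintf(os.Stderr, "%s: not a readable ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()
	if want := expectedMachine[runtime.GOARCH]; f.Machine != want {
		fmt.Fprintf(os.Stderr, "%s: ELF machine %v, host %s expects %v; exec(2) would fail with ENOEXEC\n",
			path, f.Machine, runtime.GOARCH, want)
		os.Exit(1)
	}
	fmt.Printf("%s matches host architecture %s\n", path, runtime.GOARCH)
}

Against the binary from this run, a check like this would surface either an ELF parse failure or a machine mismatch up front, instead of letting dozens of dependent tests fail later on the same error.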

x
+
TestMultiNode/serial/DeleteNode (7.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-026168 node delete m03: (4.651350556s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:436: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (494.787µs)
multinode_test.go:438: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-026168
helpers_test.go:235: (dbg) docker inspect multinode-026168:

-- stdout --
	[
	    {
	        "Id": "23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74",
	        "Created": "2024-09-16T10:53:21.752929602Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 167840,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:56:12.907878157Z",
	            "FinishedAt": "2024-09-16T10:56:12.197549064Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/hostname",
	        "HostsPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/hosts",
	        "LogPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74-json.log",
	        "Name": "/multinode-026168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-026168:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-026168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-026168",
	                "Source": "/var/lib/docker/volumes/multinode-026168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-026168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-026168",
	                "name.minikube.sigs.k8s.io": "multinode-026168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "88e8365ee9161a6b9fe3aa957c34abefab9b768e78eee9e5f1c0a8d8d21175fb",
	            "SandboxKey": "/var/run/docker/netns/88e8365ee916",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32923"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32924"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32927"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32925"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32926"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-026168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a5a173559814a989877e5b7826f3cf7f4df5f065fe1cdcc6350cf486bc64e678",
	                    "EndpointID": "c6a8acb8d5a0d79abf18b47767da60229903dd30e64196b8c64836e5c35f2cbe",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-026168",
	                        "23ba806c0524"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
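When a script needs one field from the inspect document above, such as the host port published for the API server on 8443/tcp, a Go-template format string is simpler than parsing the full JSON. A small sketch that shells out to docker with such a template; the container name is taken from this report, and the template uses docker's standard inspect formatting:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract NetworkSettings.Ports["8443/tcp"][0].HostPort from the
	// container inspected above; docker evaluates the Go template itself.
	out, err := exec.Command("docker", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"multinode-026168").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	// For the state captured above this prints 32926.
	fmt.Println("API server host port:", strings.TrimSpace(string(out)))
}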
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-026168 -n multinode-026168
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-026168 logs -n 25: (1.413507502s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2288589271/001/cp-test_multinode-026168-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168:/home/docker/cp-test_multinode-026168-m02_multinode-026168.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168 sudo cat                                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m02_multinode-026168.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03:/home/docker/cp-test_multinode-026168-m02_multinode-026168-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168-m03 sudo cat                                   | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m02_multinode-026168-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp testdata/cp-test.txt                                                | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2288589271/001/cp-test_multinode-026168-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168:/home/docker/cp-test_multinode-026168-m03_multinode-026168.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168 sudo cat                                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m03_multinode-026168.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02:/home/docker/cp-test_multinode-026168-m03_multinode-026168-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168-m02 sudo cat                                   | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m03_multinode-026168-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-026168 node stop m03                                                          | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	| node    | multinode-026168 node start                                                             | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-026168                                                                | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC |                     |
	| stop    | -p multinode-026168                                                                     | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:56 UTC |
	| start   | -p multinode-026168                                                                     | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:57 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-026168                                                                | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC |                     |
	| node    | multinode-026168 node delete                                                            | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:56:12
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:56:12.517816  167544 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:56:12.517945  167544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:12.517956  167544 out.go:358] Setting ErrFile to fd 2...
	I0916 10:56:12.517962  167544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:12.518234  167544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:56:12.518921  167544 out.go:352] Setting JSON to false
	I0916 10:56:12.520116  167544 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2313,"bootTime":1726481860,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:56:12.520242  167544 start.go:139] virtualization: kvm guest
	I0916 10:56:12.522926  167544 out.go:177] * [multinode-026168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:56:12.524346  167544 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:56:12.524350  167544 notify.go:220] Checking for updates...
	I0916 10:56:12.525889  167544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:56:12.527437  167544 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:56:12.529154  167544 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:56:12.530721  167544 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:56:12.532439  167544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:56:12.534390  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:56:12.534479  167544 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:56:12.558465  167544 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:56:12.558597  167544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:56:12.611336  167544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:56:12.601152536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:56:12.611486  167544 docker.go:318] overlay module found
	I0916 10:56:12.613758  167544 out.go:177] * Using the docker driver based on existing profile
	I0916 10:56:12.615277  167544 start.go:297] selected driver: docker
	I0916 10:56:12.615301  167544 start.go:901] validating driver "docker" against &{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:12.615461  167544 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:56:12.615551  167544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:56:12.667480  167544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:56:12.655645278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:56:12.668109  167544 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:56:12.668140  167544 cni.go:84] Creating CNI manager for ""
	I0916 10:56:12.668170  167544 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:56:12.668221  167544 start.go:340] cluster config:
	{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:12.670564  167544 out.go:177] * Starting "multinode-026168" primary control-plane node in "multinode-026168" cluster
	I0916 10:56:12.671929  167544 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:56:12.673537  167544 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:56:12.675057  167544 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:56:12.675098  167544 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:56:12.675116  167544 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:56:12.675127  167544 cache.go:56] Caching tarball of preloaded images
	I0916 10:56:12.675242  167544 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:56:12.675257  167544 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:56:12.675381  167544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	W0916 10:56:12.695313  167544 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:56:12.695331  167544 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:56:12.695397  167544 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:56:12.695408  167544 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:56:12.695415  167544 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:56:12.695422  167544 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:56:12.695429  167544 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:56:12.696567  167544 image.go:273] response: 
	I0916 10:56:12.772533  167544 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:56:12.772590  167544 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:56:12.772634  167544 start.go:360] acquireMachinesLock for multinode-026168: {Name:mk1016c8f1a43c2d6030796baf01aa33f86316e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:56:12.772726  167544 start.go:364] duration metric: took 63.095µs to acquireMachinesLock for "multinode-026168"
	I0916 10:56:12.772750  167544 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:56:12.772761  167544 fix.go:54] fixHost starting: 
	I0916 10:56:12.773064  167544 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:56:12.790732  167544 fix.go:112] recreateIfNeeded on multinode-026168: state=Stopped err=<nil>
	W0916 10:56:12.790784  167544 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:56:12.793214  167544 out.go:177] * Restarting existing docker container for "multinode-026168" ...
	I0916 10:56:12.794852  167544 cli_runner.go:164] Run: docker start multinode-026168
	I0916 10:56:13.081200  167544 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:56:13.100754  167544 kic.go:430] container "multinode-026168" state is running.
	I0916 10:56:13.101154  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:56:13.120294  167544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:56:13.120700  167544 machine.go:93] provisionDockerMachine start ...
	I0916 10:56:13.120789  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:13.139507  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:13.139742  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0916 10:56:13.139758  167544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:56:13.140470  167544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34562->127.0.0.1:32923: read: connection reset by peer
	I0916 10:56:16.273055  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168
	
	I0916 10:56:16.273085  167544 ubuntu.go:169] provisioning hostname "multinode-026168"
	I0916 10:56:16.273141  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:16.290776  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:16.290976  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0916 10:56:16.290990  167544 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168 && echo "multinode-026168" | sudo tee /etc/hostname
	I0916 10:56:16.432458  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168
	
	I0916 10:56:16.432530  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:16.449641  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:16.449866  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0916 10:56:16.449884  167544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:56:16.581578  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
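
The shell fragment above is an idempotent /etc/hosts update: add a 127.0.1.1 entry only if no line already ends in the hostname, rewriting any existing 127.0.1.1 line rather than duplicating it. A minimal Go sketch of the same logic follows; the ensureHostname helper is hypothetical (not minikube code), while the path and hostname come from the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname mirrors the grep/sed script above: it is a no-op when a
// line already ends in the hostname, rewrites an existing 127.0.1.1 line,
// and otherwise appends a new one.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil // already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+name))
	} else {
		if len(data) > 0 && data[len(data)-1] != '\n' {
			data = append(data, '\n')
		}
		data = append(data, []byte("127.0.1.1 "+name+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "multinode-026168"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}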
	I0916 10:56:16.581617  167544 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:56:16.581648  167544 ubuntu.go:177] setting up certificates
	I0916 10:56:16.581663  167544 provision.go:84] configureAuth start
	I0916 10:56:16.581741  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:56:16.598943  167544 provision.go:143] copyHostCerts
	I0916 10:56:16.598987  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:56:16.599027  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:56:16.599039  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:56:16.599114  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:56:16.599210  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:56:16.599236  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:56:16.599245  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:56:16.599282  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:56:16.599342  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:56:16.599366  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:56:16.599375  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:56:16.599408  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:56:16.599475  167544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-026168]
	I0916 10:56:16.722005  167544 provision.go:177] copyRemoteCerts
	I0916 10:56:16.722070  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:56:16.722105  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:16.740251  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:56:16.834525  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:56:16.834589  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:56:16.857227  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:56:16.857285  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 10:56:16.879362  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:56:16.879425  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:56:16.901401  167544 provision.go:87] duration metric: took 319.718367ms to configureAuth
	I0916 10:56:16.901435  167544 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:56:16.901675  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:56:16.901769  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:16.919175  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:16.919340  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0916 10:56:16.919358  167544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:56:17.222536  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:56:17.222570  167544 machine.go:96] duration metric: took 4.101843203s to provisionDockerMachine
	I0916 10:56:17.222584  167544 start.go:293] postStartSetup for "multinode-026168" (driver="docker")
	I0916 10:56:17.222598  167544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:56:17.222668  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:56:17.222729  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:17.242251  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:56:17.338250  167544 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:56:17.341182  167544 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:56:17.341203  167544 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:56:17.341212  167544 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:56:17.341220  167544 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:56:17.341228  167544 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:56:17.341234  167544 command_runner.go:130] > ID=ubuntu
	I0916 10:56:17.341240  167544 command_runner.go:130] > ID_LIKE=debian
	I0916 10:56:17.341248  167544 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:56:17.341256  167544 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:56:17.341268  167544 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:56:17.341277  167544 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:56:17.341284  167544 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:56:17.341355  167544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:56:17.341389  167544 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:56:17.341399  167544 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:56:17.341407  167544 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:56:17.341419  167544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:56:17.341474  167544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:56:17.341556  167544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:56:17.341566  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:56:17.341643  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:56:17.349399  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:56:17.371416  167544 start.go:296] duration metric: took 148.818475ms for postStartSetup
	I0916 10:56:17.371495  167544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:56:17.371555  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:17.388938  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:56:17.478463  167544 command_runner.go:130] > 31%
	I0916 10:56:17.478523  167544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:56:17.482749  167544 command_runner.go:130] > 203G
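
The two df invocations above extract percent used (`df -h /var`, field 5 of row 2) and gigabytes available (`df -BG /var`, field 4 of row 2). A minimal Go sketch of the same check; dfField is a hypothetical helper, not part of minikube.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dfField returns column col of the second output row of `df <flag> <path>`,
// the same field awk extracts in the log above.
func dfField(flag, path string, col int) (string, error) {
	out, err := exec.Command("df", flag, path).Output()
	if err != nil {
		return "", err
	}
	rows := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(rows) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	fields := strings.Fields(rows[1])
	if col > len(fields) {
		return "", fmt.Errorf("row has only %d fields", len(fields))
	}
	return fields[col-1], nil
}

func main() {
	used, _ := dfField("-h", "/var", 5)   // e.g. "31%"
	avail, _ := dfField("-BG", "/var", 4) // e.g. "203G"
	fmt.Println(used, avail)
}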
	I0916 10:56:17.482782  167544 fix.go:56] duration metric: took 4.710020045s for fixHost
	I0916 10:56:17.482794  167544 start.go:83] releasing machines lock for "multinode-026168", held for 4.710055174s
	I0916 10:56:17.482862  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:56:17.500093  167544 ssh_runner.go:195] Run: cat /version.json
	I0916 10:56:17.500138  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:17.500197  167544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:56:17.500256  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:56:17.518027  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:56:17.518772  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:56:17.682091  167544 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:56:17.684282  167544 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
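
The /version.json read above carries the ISO, kicbase, and minikube version pins for the base image. A minimal sketch of decoding that payload; the kicVersion struct is an assumption modeled on the fields visible in the logged JSON.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// kicVersion models the /version.json payload shown in the log.
type kicVersion struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	data, err := os.ReadFile("/version.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var v kicVersion
	if err := json.Unmarshal(data, &v); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("kicbase %s, minikube %s (%s)\n", v.KicbaseVersion, v.MinikubeVersion, v.Commit)
}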
	I0916 10:56:17.684413  167544 ssh_runner.go:195] Run: systemctl --version
	I0916 10:56:17.688411  167544 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:56:17.688447  167544 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:56:17.688500  167544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:56:17.825622  167544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:56:17.829829  167544 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf.mk_disabled
	I0916 10:56:17.829857  167544 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:56:17.829868  167544 command_runner.go:130] > Device: 37h/55d	Inode: 535096      Links: 1
	I0916 10:56:17.829880  167544 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:17.829887  167544 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:17.829892  167544 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:17.829897  167544 command_runner.go:130] > Change: 2024-09-16 10:53:24.206895094 +0000
	I0916 10:56:17.829902  167544 command_runner.go:130] >  Birth: 2024-09-16 10:53:24.202894799 +0000
	I0916 10:56:17.829958  167544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:56:17.837905  167544 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:56:17.837976  167544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:56:17.846626  167544 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
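
The two find/mv passes above disable conflicting CNI configs by renaming them with a .mk_disabled suffix so CRI-O ignores them (here the loopback conf is renamed and no bridge/podman confs are found). A minimal Go sketch of the loopback pass, assuming the same directory and suffix:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Find every loopback CNI config, as the find command above does.
	matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, m := range matches {
		if strings.HasSuffix(m, ".mk_disabled") {
			continue // already disabled on a previous start
		}
		if err := os.Rename(m, m+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}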
	I0916 10:56:17.846652  167544 start.go:495] detecting cgroup driver to use...
	I0916 10:56:17.846681  167544 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:56:17.846720  167544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:56:17.857623  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:56:17.867898  167544 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:56:17.867953  167544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:56:17.879580  167544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:56:17.889837  167544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:56:17.966322  167544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:56:18.045294  167544 docker.go:233] disabling docker service ...
	I0916 10:56:18.045372  167544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:56:18.056640  167544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:56:18.066875  167544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:56:18.150212  167544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:56:18.222142  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:56:18.232826  167544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:56:18.246705  167544 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:56:18.247595  167544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:56:18.247647  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:18.256525  167544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:56:18.256577  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:18.265824  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:18.275033  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:18.283932  167544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:56:18.292711  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:18.302262  167544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:18.311425  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
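
The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, reset conmon_cgroup, and open unprivileged ports through default_sysctls. A minimal in-process sketch of the first two substitutions; the setOption helper is hypothetical, while the keys and values are taken from the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setOption replaces any line assigning key with `key = "value"`, the same
// effect as the sed substitutions above.
func setOption(data []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	data = setOption(data, "pause_image", "registry.k8s.io/pause:3.10")
	data = setOption(data, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(conf, data, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}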
	I0916 10:56:18.320541  167544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:56:18.327497  167544 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:56:18.328160  167544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:56:18.335679  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:18.405437  167544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:56:18.516586  167544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:56:18.516647  167544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:56:18.519949  167544 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:56:18.519969  167544 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:56:18.519978  167544 command_runner.go:130] > Device: 40h/64d	Inode: 207         Links: 1
	I0916 10:56:18.519988  167544 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:18.519999  167544 command_runner.go:130] > Access: 2024-09-16 10:56:18.503716294 +0000
	I0916 10:56:18.520010  167544 command_runner.go:130] > Modify: 2024-09-16 10:56:18.503716294 +0000
	I0916 10:56:18.520018  167544 command_runner.go:130] > Change: 2024-09-16 10:56:18.503716294 +0000
	I0916 10:56:18.520022  167544 command_runner.go:130] >  Birth: -
	I0916 10:56:18.520049  167544 start.go:563] Will wait 60s for crictl version
	I0916 10:56:18.520084  167544 ssh_runner.go:195] Run: which crictl
	I0916 10:56:18.523179  167544 command_runner.go:130] > /usr/bin/crictl
	I0916 10:56:18.523234  167544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:56:18.554344  167544 command_runner.go:130] > Version:  0.1.0
	I0916 10:56:18.554373  167544 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:56:18.554380  167544 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:56:18.554390  167544 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:56:18.556857  167544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:56:18.556913  167544 ssh_runner.go:195] Run: crio --version
	I0916 10:56:18.590662  167544 command_runner.go:130] > crio version 1.24.6
	I0916 10:56:18.590690  167544 command_runner.go:130] > Version:          1.24.6
	I0916 10:56:18.590701  167544 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:56:18.590709  167544 command_runner.go:130] > GitTreeState:     clean
	I0916 10:56:18.590719  167544 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:56:18.590730  167544 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:56:18.590739  167544 command_runner.go:130] > Compiler:         gc
	I0916 10:56:18.590749  167544 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:56:18.590757  167544 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:56:18.590766  167544 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:56:18.590773  167544 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:56:18.590777  167544 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:56:18.590850  167544 ssh_runner.go:195] Run: crio --version
	I0916 10:56:18.622340  167544 command_runner.go:130] > crio version 1.24.6
	I0916 10:56:18.622362  167544 command_runner.go:130] > Version:          1.24.6
	I0916 10:56:18.622369  167544 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:56:18.622373  167544 command_runner.go:130] > GitTreeState:     clean
	I0916 10:56:18.622385  167544 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:56:18.622389  167544 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:56:18.622393  167544 command_runner.go:130] > Compiler:         gc
	I0916 10:56:18.622399  167544 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:56:18.622408  167544 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:56:18.622418  167544 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:56:18.622429  167544 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:56:18.622437  167544 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:56:18.626027  167544 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:56:18.627412  167544 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
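
The --format template above makes `docker network inspect` emit a single JSON object, but note that the range over .Containers leaves a trailing comma inside ContainerIPs, which strict JSON rejects. A minimal sketch of consuming that output; the netInfo struct mirrors the template's keys, and the comma stripping is an assumption about how the payload would have to be cleaned before decoding.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// netInfo mirrors the keys produced by the inspect format string above.
type netInfo struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func main() {
	format := `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}`
	out, err := exec.Command("docker", "network", "inspect", "multinode-026168", "--format", format).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Drop the trailing comma the template leaves before "]".
	out = bytes.ReplaceAll(bytes.TrimSpace(out), []byte(",]"), []byte("]"))
	var n netInfo
	if err := json.Unmarshal(out, &n); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("%s: subnet %s, gateway %s\n", n.Name, n.Subnet, n.Gateway)
}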
	I0916 10:56:18.644032  167544 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:56:18.647900  167544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:56:18.658261  167544 kubeadm.go:883] updating cluster {Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:56:18.658394  167544 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:56:18.658436  167544 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:56:18.696778  167544 command_runner.go:130] > {
	I0916 10:56:18.696810  167544 command_runner.go:130] >   "images": [
	I0916 10:56:18.696817  167544 command_runner.go:130] >     {
	I0916 10:56:18.696826  167544 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:56:18.696832  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.696843  167544 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:56:18.696847  167544 command_runner.go:130] >       ],
	I0916 10:56:18.696852  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.696863  167544 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:56:18.696879  167544 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:56:18.696896  167544 command_runner.go:130] >       ],
	I0916 10:56:18.696908  167544 command_runner.go:130] >       "size": "87190579",
	I0916 10:56:18.696916  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.696921  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.696929  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.696936  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.696940  167544 command_runner.go:130] >     },
	I0916 10:56:18.696947  167544 command_runner.go:130] >     {
	I0916 10:56:18.696958  167544 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 10:56:18.696969  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.696982  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 10:56:18.696992  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697003  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.697017  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 10:56:18.697028  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 10:56:18.697036  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697043  167544 command_runner.go:130] >       "size": "1363676",
	I0916 10:56:18.697054  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.697068  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.697084  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.697097  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.697107  167544 command_runner.go:130] >     },
	I0916 10:56:18.697116  167544 command_runner.go:130] >     {
	I0916 10:56:18.697125  167544 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:56:18.697135  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.697149  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:56:18.697159  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697171  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.697188  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:56:18.697202  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:56:18.697213  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697224  167544 command_runner.go:130] >       "size": "31470524",
	I0916 10:56:18.697236  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.697247  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.697258  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.697269  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.697279  167544 command_runner.go:130] >     },
	I0916 10:56:18.697287  167544 command_runner.go:130] >     {
	I0916 10:56:18.697296  167544 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:56:18.697307  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.697320  167544 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:56:18.697330  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697371  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.697385  167544 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:56:18.697404  167544 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:56:18.697413  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697418  167544 command_runner.go:130] >       "size": "63273227",
	I0916 10:56:18.697429  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.697440  167544 command_runner.go:130] >       "username": "nonroot",
	I0916 10:56:18.697457  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.697468  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.697475  167544 command_runner.go:130] >     },
	I0916 10:56:18.697484  167544 command_runner.go:130] >     {
	I0916 10:56:18.697497  167544 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:56:18.697506  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.697518  167544 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:56:18.697529  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697540  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.697562  167544 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:56:18.697578  167544 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:56:18.697587  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697594  167544 command_runner.go:130] >       "size": "149009664",
	I0916 10:56:18.697604  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.697616  167544 command_runner.go:130] >         "value": "0"
	I0916 10:56:18.697626  167544 command_runner.go:130] >       },
	I0916 10:56:18.697638  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.697645  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.697653  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.697663  167544 command_runner.go:130] >     },
	I0916 10:56:18.697672  167544 command_runner.go:130] >     {
	I0916 10:56:18.697680  167544 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:56:18.697691  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.697735  167544 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:56:18.697747  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697754  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.697763  167544 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:56:18.697779  167544 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:56:18.697790  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697802  167544 command_runner.go:130] >       "size": "95237600",
	I0916 10:56:18.697813  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.697827  167544 command_runner.go:130] >         "value": "0"
	I0916 10:56:18.697838  167544 command_runner.go:130] >       },
	I0916 10:56:18.697844  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.697854  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.697866  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.697876  167544 command_runner.go:130] >     },
	I0916 10:56:18.697886  167544 command_runner.go:130] >     {
	I0916 10:56:18.697901  167544 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:56:18.697914  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.697926  167544 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:56:18.697932  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697937  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.697950  167544 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:56:18.697964  167544 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:56:18.697972  167544 command_runner.go:130] >       ],
	I0916 10:56:18.697984  167544 command_runner.go:130] >       "size": "89437508",
	I0916 10:56:18.697992  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.698000  167544 command_runner.go:130] >         "value": "0"
	I0916 10:56:18.698010  167544 command_runner.go:130] >       },
	I0916 10:56:18.698019  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.698029  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.698040  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.698051  167544 command_runner.go:130] >     },
	I0916 10:56:18.698060  167544 command_runner.go:130] >     {
	I0916 10:56:18.698075  167544 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:56:18.698086  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.698096  167544 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:56:18.698104  167544 command_runner.go:130] >       ],
	I0916 10:56:18.698109  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.698134  167544 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:56:18.698151  167544 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:56:18.698162  167544 command_runner.go:130] >       ],
	I0916 10:56:18.698173  167544 command_runner.go:130] >       "size": "92733849",
	I0916 10:56:18.698180  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.698189  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.698198  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.698207  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.698216  167544 command_runner.go:130] >     },
	I0916 10:56:18.698226  167544 command_runner.go:130] >     {
	I0916 10:56:18.698240  167544 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:56:18.698251  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.698263  167544 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:56:18.698270  167544 command_runner.go:130] >       ],
	I0916 10:56:18.698277  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.698285  167544 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:56:18.698298  167544 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:56:18.698315  167544 command_runner.go:130] >       ],
	I0916 10:56:18.698329  167544 command_runner.go:130] >       "size": "68420934",
	I0916 10:56:18.698340  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.698351  167544 command_runner.go:130] >         "value": "0"
	I0916 10:56:18.698360  167544 command_runner.go:130] >       },
	I0916 10:56:18.698366  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.698375  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.698382  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.698392  167544 command_runner.go:130] >     },
	I0916 10:56:18.698403  167544 command_runner.go:130] >     {
	I0916 10:56:18.698418  167544 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:56:18.698429  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.698441  167544 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:56:18.698451  167544 command_runner.go:130] >       ],
	I0916 10:56:18.698461  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.698477  167544 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:56:18.698493  167544 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:56:18.698510  167544 command_runner.go:130] >       ],
	I0916 10:56:18.698522  167544 command_runner.go:130] >       "size": "742080",
	I0916 10:56:18.698532  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.698541  167544 command_runner.go:130] >         "value": "65535"
	I0916 10:56:18.698548  167544 command_runner.go:130] >       },
	I0916 10:56:18.698589  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.698600  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.698612  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.698619  167544 command_runner.go:130] >     }
	I0916 10:56:18.698624  167544 command_runner.go:130] >   ]
	I0916 10:56:18.698633  167544 command_runner.go:130] > }
	I0916 10:56:18.698888  167544 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:56:18.698906  167544 crio.go:433] Images already preloaded, skipping extraction
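
crio.go concludes from the `crictl images --output json` inventory above that every image required for v1.31.1 is already present, so the preload tarball is not extracted. A minimal sketch of parsing that payload; the struct models only the fields visible in the logged JSON.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// crictlImage models one entry of the `crictl images --output json` list;
// note "size" is a quoted string in the payload above.
type crictlImage struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
	Pinned   bool     `json:"pinned"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var payload struct {
		Images []crictlImage `json:"images"`
	}
	if err := json.Unmarshal(out, &payload); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, img := range payload.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}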
	I0916 10:56:18.698966  167544 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:56:18.730997  167544 command_runner.go:130] > {
	I0916 10:56:18.731021  167544 command_runner.go:130] >   "images": [
	I0916 10:56:18.731026  167544 command_runner.go:130] >     {
	I0916 10:56:18.731034  167544 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:56:18.731039  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731045  167544 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:56:18.731048  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731052  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731061  167544 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:56:18.731068  167544 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:56:18.731073  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731078  167544 command_runner.go:130] >       "size": "87190579",
	I0916 10:56:18.731084  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.731089  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731100  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731107  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731111  167544 command_runner.go:130] >     },
	I0916 10:56:18.731117  167544 command_runner.go:130] >     {
	I0916 10:56:18.731123  167544 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 10:56:18.731137  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731145  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 10:56:18.731150  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731155  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731164  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 10:56:18.731174  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 10:56:18.731180  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731184  167544 command_runner.go:130] >       "size": "1363676",
	I0916 10:56:18.731188  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.731194  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731197  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731201  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731204  167544 command_runner.go:130] >     },
	I0916 10:56:18.731207  167544 command_runner.go:130] >     {
	I0916 10:56:18.731213  167544 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:56:18.731217  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731222  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:56:18.731228  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731232  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731242  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:56:18.731252  167544 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:56:18.731257  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731264  167544 command_runner.go:130] >       "size": "31470524",
	I0916 10:56:18.731271  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.731275  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731281  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731285  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731290  167544 command_runner.go:130] >     },
	I0916 10:56:18.731294  167544 command_runner.go:130] >     {
	I0916 10:56:18.731302  167544 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:56:18.731308  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731313  167544 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:56:18.731317  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731323  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731330  167544 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:56:18.731342  167544 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:56:18.731348  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731353  167544 command_runner.go:130] >       "size": "63273227",
	I0916 10:56:18.731359  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.731364  167544 command_runner.go:130] >       "username": "nonroot",
	I0916 10:56:18.731369  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731373  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731379  167544 command_runner.go:130] >     },
	I0916 10:56:18.731382  167544 command_runner.go:130] >     {
	I0916 10:56:18.731393  167544 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:56:18.731399  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731404  167544 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:56:18.731411  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731416  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731425  167544 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:56:18.731434  167544 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:56:18.731440  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731444  167544 command_runner.go:130] >       "size": "149009664",
	I0916 10:56:18.731450  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.731454  167544 command_runner.go:130] >         "value": "0"
	I0916 10:56:18.731460  167544 command_runner.go:130] >       },
	I0916 10:56:18.731464  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731470  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731474  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731477  167544 command_runner.go:130] >     },
	I0916 10:56:18.731481  167544 command_runner.go:130] >     {
	I0916 10:56:18.731487  167544 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:56:18.731494  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731499  167544 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:56:18.731504  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731509  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731517  167544 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:56:18.731527  167544 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:56:18.731532  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731536  167544 command_runner.go:130] >       "size": "95237600",
	I0916 10:56:18.731553  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.731560  167544 command_runner.go:130] >         "value": "0"
	I0916 10:56:18.731563  167544 command_runner.go:130] >       },
	I0916 10:56:18.731567  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731573  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731577  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731584  167544 command_runner.go:130] >     },
	I0916 10:56:18.731592  167544 command_runner.go:130] >     {
	I0916 10:56:18.731600  167544 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:56:18.731608  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731614  167544 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:56:18.731620  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731624  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731633  167544 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:56:18.731643  167544 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:56:18.731649  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731653  167544 command_runner.go:130] >       "size": "89437508",
	I0916 10:56:18.731658  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.731662  167544 command_runner.go:130] >         "value": "0"
	I0916 10:56:18.731666  167544 command_runner.go:130] >       },
	I0916 10:56:18.731673  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731678  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731682  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731688  167544 command_runner.go:130] >     },
	I0916 10:56:18.731691  167544 command_runner.go:130] >     {
	I0916 10:56:18.731699  167544 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:56:18.731705  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731711  167544 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:56:18.731716  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731721  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731735  167544 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:56:18.731745  167544 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:56:18.731750  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731756  167544 command_runner.go:130] >       "size": "92733849",
	I0916 10:56:18.731762  167544 command_runner.go:130] >       "uid": null,
	I0916 10:56:18.731766  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731772  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731776  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731782  167544 command_runner.go:130] >     },
	I0916 10:56:18.731786  167544 command_runner.go:130] >     {
	I0916 10:56:18.731794  167544 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:56:18.731801  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731805  167544 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:56:18.731811  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731815  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731825  167544 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:56:18.731844  167544 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:56:18.731851  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731855  167544 command_runner.go:130] >       "size": "68420934",
	I0916 10:56:18.731858  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.731862  167544 command_runner.go:130] >         "value": "0"
	I0916 10:56:18.731865  167544 command_runner.go:130] >       },
	I0916 10:56:18.731871  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731875  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731879  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731885  167544 command_runner.go:130] >     },
	I0916 10:56:18.731888  167544 command_runner.go:130] >     {
	I0916 10:56:18.731895  167544 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:56:18.731902  167544 command_runner.go:130] >       "repoTags": [
	I0916 10:56:18.731907  167544 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:56:18.731913  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731917  167544 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:18.731926  167544 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:56:18.731935  167544 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:56:18.731941  167544 command_runner.go:130] >       ],
	I0916 10:56:18.731947  167544 command_runner.go:130] >       "size": "742080",
	I0916 10:56:18.731956  167544 command_runner.go:130] >       "uid": {
	I0916 10:56:18.731960  167544 command_runner.go:130] >         "value": "65535"
	I0916 10:56:18.731967  167544 command_runner.go:130] >       },
	I0916 10:56:18.731973  167544 command_runner.go:130] >       "username": "",
	I0916 10:56:18.731979  167544 command_runner.go:130] >       "spec": null,
	I0916 10:56:18.731984  167544 command_runner.go:130] >       "pinned": false
	I0916 10:56:18.731989  167544 command_runner.go:130] >     }
	I0916 10:56:18.731993  167544 command_runner.go:130] >   ]
	I0916 10:56:18.731999  167544 command_runner.go:130] > }
	I0916 10:56:18.732108  167544 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:56:18.732119  167544 cache_images.go:84] Images are preloaded, skipping loading
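
A minimal Go sketch of the preload check implied above: unmarshal the "crictl images --output json" payload and confirm every required tag is present before skipping extraction. Struct and function names here are hypothetical, not minikube's actual crio.go/cache_images.go code.

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of the JSON printed by crictl above.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every required repo tag is already known
// to the CRI runtime, i.e. whether extraction can be skipped.
func allPreloaded(raw []byte, required []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil // at least one image would still need loading
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"abc","repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.10"})
	fmt.Println(ok, err)
}
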
	I0916 10:56:18.732126  167544 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.31.1 crio true true} ...
	I0916 10:56:18.732216  167544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
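
The kubelet unit printed above is a systemd drop-in that minikube renders from the cluster config (version v1.31.1, node multinode-026168, node IP 192.168.67.2). A sketch of how such a drop-in could be rendered with Go's text/template; the template and field names are illustrative, not minikube's own kubeadm.go code:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log above.
	tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "multinode-026168", "192.168.67.2"})
}
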
	I0916 10:56:18.732273  167544 ssh_runner.go:195] Run: crio config
	I0916 10:56:18.769207  167544 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 10:56:18.769237  167544 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 10:56:18.769247  167544 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 10:56:18.769257  167544 command_runner.go:130] > #
	I0916 10:56:18.769268  167544 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 10:56:18.769277  167544 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 10:56:18.769286  167544 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 10:56:18.769298  167544 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 10:56:18.769309  167544 command_runner.go:130] > # reload'.
	I0916 10:56:18.769319  167544 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 10:56:18.769345  167544 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 10:56:18.769358  167544 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 10:56:18.769369  167544 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 10:56:18.769378  167544 command_runner.go:130] > [crio]
	I0916 10:56:18.769387  167544 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 10:56:18.769399  167544 command_runner.go:130] > # containers images, in this directory.
	I0916 10:56:18.769412  167544 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0916 10:56:18.769425  167544 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 10:56:18.769433  167544 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0916 10:56:18.769452  167544 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 10:56:18.769465  167544 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 10:56:18.769474  167544 command_runner.go:130] > # storage_driver = "vfs"
	I0916 10:56:18.769487  167544 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 10:56:18.769496  167544 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 10:56:18.769506  167544 command_runner.go:130] > # storage_option = [
	I0916 10:56:18.769512  167544 command_runner.go:130] > # ]
	I0916 10:56:18.769523  167544 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 10:56:18.769552  167544 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 10:56:18.769568  167544 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 10:56:18.769577  167544 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 10:56:18.769586  167544 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 10:56:18.769594  167544 command_runner.go:130] > # always happen on a node reboot
	I0916 10:56:18.769601  167544 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 10:56:18.769610  167544 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 10:56:18.769621  167544 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 10:56:18.769633  167544 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 10:56:18.769646  167544 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0916 10:56:18.769660  167544 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 10:56:18.769683  167544 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 10:56:18.769692  167544 command_runner.go:130] > # internal_wipe = true
	I0916 10:56:18.769700  167544 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 10:56:18.769712  167544 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 10:56:18.769725  167544 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 10:56:18.769736  167544 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 10:56:18.769748  167544 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 10:56:18.769754  167544 command_runner.go:130] > [crio.api]
	I0916 10:56:18.769763  167544 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 10:56:18.769771  167544 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 10:56:18.769779  167544 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 10:56:18.769784  167544 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 10:56:18.769797  167544 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 10:56:18.769807  167544 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 10:56:18.769816  167544 command_runner.go:130] > # stream_port = "0"
	I0916 10:56:18.769824  167544 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 10:56:18.769834  167544 command_runner.go:130] > # stream_enable_tls = false
	I0916 10:56:18.769844  167544 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 10:56:18.769859  167544 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 10:56:18.769874  167544 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 10:56:18.769887  167544 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 10:56:18.769894  167544 command_runner.go:130] > # minutes.
	I0916 10:56:18.769902  167544 command_runner.go:130] > # stream_tls_cert = ""
	I0916 10:56:18.769912  167544 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 10:56:18.769924  167544 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 10:56:18.769934  167544 command_runner.go:130] > # stream_tls_key = ""
	I0916 10:56:18.769942  167544 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 10:56:18.769956  167544 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 10:56:18.769965  167544 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 10:56:18.769975  167544 command_runner.go:130] > # stream_tls_ca = ""
	I0916 10:56:18.769988  167544 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:56:18.770000  167544 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0916 10:56:18.770014  167544 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:56:18.770023  167544 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0916 10:56:18.770039  167544 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 10:56:18.770050  167544 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 10:56:18.770059  167544 command_runner.go:130] > [crio.runtime]
	I0916 10:56:18.770069  167544 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 10:56:18.770080  167544 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 10:56:18.770087  167544 command_runner.go:130] > # "nofile=1024:2048"
	I0916 10:56:18.770096  167544 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 10:56:18.770106  167544 command_runner.go:130] > # default_ulimits = [
	I0916 10:56:18.770111  167544 command_runner.go:130] > # ]
	I0916 10:56:18.770121  167544 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 10:56:18.770131  167544 command_runner.go:130] > # no_pivot = false
	I0916 10:56:18.770140  167544 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 10:56:18.770152  167544 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 10:56:18.770167  167544 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 10:56:18.770181  167544 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 10:56:18.770192  167544 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 10:56:18.770203  167544 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:56:18.770212  167544 command_runner.go:130] > # conmon = ""
	I0916 10:56:18.770219  167544 command_runner.go:130] > # Cgroup setting for conmon
	I0916 10:56:18.770230  167544 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 10:56:18.770236  167544 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 10:56:18.770246  167544 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 10:56:18.770253  167544 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 10:56:18.770272  167544 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:56:18.770283  167544 command_runner.go:130] > # conmon_env = [
	I0916 10:56:18.770288  167544 command_runner.go:130] > # ]
	I0916 10:56:18.770296  167544 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 10:56:18.770304  167544 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 10:56:18.770316  167544 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 10:56:18.770326  167544 command_runner.go:130] > # default_env = [
	I0916 10:56:18.770334  167544 command_runner.go:130] > # ]
	I0916 10:56:18.770344  167544 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 10:56:18.770351  167544 command_runner.go:130] > # selinux = false
	I0916 10:56:18.770364  167544 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 10:56:18.770375  167544 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 10:56:18.770387  167544 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 10:56:18.770395  167544 command_runner.go:130] > # seccomp_profile = ""
	I0916 10:56:18.770403  167544 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 10:56:18.770415  167544 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 10:56:18.770425  167544 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 10:56:18.770435  167544 command_runner.go:130] > # which might increase security.
	I0916 10:56:18.770443  167544 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0916 10:56:18.770455  167544 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 10:56:18.770464  167544 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 10:56:18.770473  167544 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 10:56:18.770486  167544 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 10:56:18.770496  167544 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:56:18.770504  167544 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 10:56:18.770516  167544 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 10:56:18.770528  167544 command_runner.go:130] > # the cgroup blockio controller.
	I0916 10:56:18.770534  167544 command_runner.go:130] > # blockio_config_file = ""
	I0916 10:56:18.770545  167544 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 10:56:18.770554  167544 command_runner.go:130] > # irqbalance daemon.
	I0916 10:56:18.770561  167544 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 10:56:18.770574  167544 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 10:56:18.770585  167544 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:56:18.770594  167544 command_runner.go:130] > # rdt_config_file = ""
	I0916 10:56:18.770603  167544 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 10:56:18.770613  167544 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 10:56:18.770622  167544 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 10:56:18.770627  167544 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 10:56:18.770636  167544 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 10:56:18.770652  167544 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 10:56:18.770658  167544 command_runner.go:130] > # will be added.
	I0916 10:56:18.770668  167544 command_runner.go:130] > # default_capabilities = [
	I0916 10:56:18.770673  167544 command_runner.go:130] > # 	"CHOWN",
	I0916 10:56:18.770678  167544 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 10:56:18.770682  167544 command_runner.go:130] > # 	"FSETID",
	I0916 10:56:18.770688  167544 command_runner.go:130] > # 	"FOWNER",
	I0916 10:56:18.770693  167544 command_runner.go:130] > # 	"SETGID",
	I0916 10:56:18.770698  167544 command_runner.go:130] > # 	"SETUID",
	I0916 10:56:18.770704  167544 command_runner.go:130] > # 	"SETPCAP",
	I0916 10:56:18.770708  167544 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 10:56:18.770713  167544 command_runner.go:130] > # 	"KILL",
	I0916 10:56:18.770716  167544 command_runner.go:130] > # ]
	I0916 10:56:18.770727  167544 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 10:56:18.770735  167544 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 10:56:18.770741  167544 command_runner.go:130] > # add_inheritable_capabilities = true
	I0916 10:56:18.770750  167544 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 10:56:18.770758  167544 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:56:18.770765  167544 command_runner.go:130] > default_sysctls = [
	I0916 10:56:18.770772  167544 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 10:56:18.770777  167544 command_runner.go:130] > ]
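
The one default sysctl set here, net.ipv4.ip_unprivileged_port_start=0, lets non-root container processes bind ports below 1024. A quick illustrative check of the host-side value on Linux (CRI-O applies the sysctl inside the pod's namespace; this standalone check is not part of minikube):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Standard procfs location for this sysctl on Linux.
	raw, err := os.ReadFile("/proc/sys/net/ipv4/ip_unprivileged_port_start")
	if err != nil {
		fmt.Println("cannot read sysctl:", err)
		return
	}
	fmt.Println("ip_unprivileged_port_start =", strings.TrimSpace(string(raw)))
}
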
	I0916 10:56:18.770785  167544 command_runner.go:130] > # List of devices on the host that a
	I0916 10:56:18.770794  167544 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 10:56:18.770800  167544 command_runner.go:130] > # allowed_devices = [
	I0916 10:56:18.770806  167544 command_runner.go:130] > # 	"/dev/fuse",
	I0916 10:56:18.770810  167544 command_runner.go:130] > # ]
	I0916 10:56:18.770817  167544 command_runner.go:130] > # List of additional devices, specified as
	I0916 10:56:18.770834  167544 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 10:56:18.770839  167544 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 10:56:18.770844  167544 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:56:18.770848  167544 command_runner.go:130] > # additional_devices = [
	I0916 10:56:18.770852  167544 command_runner.go:130] > # ]
	I0916 10:56:18.770857  167544 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 10:56:18.770860  167544 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 10:56:18.770864  167544 command_runner.go:130] > # 	"/etc/cdi",
	I0916 10:56:18.770868  167544 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 10:56:18.770871  167544 command_runner.go:130] > # ]
	I0916 10:56:18.770876  167544 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 10:56:18.770882  167544 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 10:56:18.770887  167544 command_runner.go:130] > # Defaults to false.
	I0916 10:56:18.770892  167544 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 10:56:18.770898  167544 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 10:56:18.770904  167544 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 10:56:18.770910  167544 command_runner.go:130] > # hooks_dir = [
	I0916 10:56:18.770916  167544 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 10:56:18.770920  167544 command_runner.go:130] > # ]
	I0916 10:56:18.770926  167544 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 10:56:18.770931  167544 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 10:56:18.770936  167544 command_runner.go:130] > # its default mounts from the following two files:
	I0916 10:56:18.770939  167544 command_runner.go:130] > #
	I0916 10:56:18.770945  167544 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 10:56:18.770951  167544 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 10:56:18.770956  167544 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 10:56:18.770959  167544 command_runner.go:130] > #
	I0916 10:56:18.770965  167544 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 10:56:18.770971  167544 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 10:56:18.770977  167544 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 10:56:18.770981  167544 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 10:56:18.770984  167544 command_runner.go:130] > #
	I0916 10:56:18.770988  167544 command_runner.go:130] > # default_mounts_file = ""
	I0916 10:56:18.770993  167544 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 10:56:18.771010  167544 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 10:56:18.771017  167544 command_runner.go:130] > # pids_limit = 0
	I0916 10:56:18.771023  167544 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 10:56:18.771029  167544 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 10:56:18.771034  167544 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 10:56:18.771042  167544 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 10:56:18.771046  167544 command_runner.go:130] > # log_size_max = -1
	I0916 10:56:18.771053  167544 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 10:56:18.771057  167544 command_runner.go:130] > # log_to_journald = false
	I0916 10:56:18.771062  167544 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 10:56:18.771067  167544 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 10:56:18.771072  167544 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 10:56:18.771076  167544 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 10:56:18.771081  167544 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 10:56:18.771084  167544 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 10:56:18.771091  167544 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 10:56:18.771094  167544 command_runner.go:130] > # read_only = false
	I0916 10:56:18.771100  167544 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 10:56:18.771106  167544 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 10:56:18.771110  167544 command_runner.go:130] > # live configuration reload.
	I0916 10:56:18.771116  167544 command_runner.go:130] > # log_level = "info"
	I0916 10:56:18.771121  167544 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 10:56:18.771126  167544 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:56:18.771130  167544 command_runner.go:130] > # log_filter = ""
	I0916 10:56:18.771135  167544 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 10:56:18.771141  167544 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 10:56:18.771145  167544 command_runner.go:130] > # separated by comma.
	I0916 10:56:18.771148  167544 command_runner.go:130] > # uid_mappings = ""
	I0916 10:56:18.771154  167544 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 10:56:18.771162  167544 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 10:56:18.771166  167544 command_runner.go:130] > # separated by comma.
	I0916 10:56:18.771169  167544 command_runner.go:130] > # gid_mappings = ""
	I0916 10:56:18.771175  167544 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 10:56:18.771180  167544 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:56:18.771186  167544 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:56:18.771190  167544 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 10:56:18.771195  167544 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 10:56:18.771201  167544 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:56:18.771206  167544 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:56:18.771211  167544 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 10:56:18.771216  167544 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 10:56:18.771222  167544 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 10:56:18.771227  167544 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 10:56:18.771230  167544 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 10:56:18.771236  167544 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 10:56:18.771241  167544 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 10:56:18.771246  167544 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 10:56:18.771250  167544 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 10:56:18.771254  167544 command_runner.go:130] > # drop_infra_ctr = true
	I0916 10:56:18.771259  167544 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 10:56:18.771264  167544 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 10:56:18.771271  167544 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 10:56:18.771278  167544 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 10:56:18.771283  167544 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 10:56:18.771287  167544 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 10:56:18.771291  167544 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 10:56:18.771298  167544 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 10:56:18.771303  167544 command_runner.go:130] > # pinns_path = ""
	I0916 10:56:18.771309  167544 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 10:56:18.771314  167544 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0916 10:56:18.771319  167544 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0916 10:56:18.771324  167544 command_runner.go:130] > # default_runtime = "runc"
	I0916 10:56:18.771328  167544 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 10:56:18.771335  167544 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0916 10:56:18.771344  167544 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 10:56:18.771348  167544 command_runner.go:130] > # creation as a file is not desired either.
	I0916 10:56:18.771355  167544 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 10:56:18.771360  167544 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 10:56:18.771364  167544 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 10:56:18.771367  167544 command_runner.go:130] > # ]
	I0916 10:56:18.771373  167544 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 10:56:18.771379  167544 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 10:56:18.771386  167544 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0916 10:56:18.771392  167544 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0916 10:56:18.771395  167544 command_runner.go:130] > #
	I0916 10:56:18.771399  167544 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0916 10:56:18.771404  167544 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0916 10:56:18.771407  167544 command_runner.go:130] > #  runtime_type = "oci"
	I0916 10:56:18.771412  167544 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0916 10:56:18.771416  167544 command_runner.go:130] > #  privileged_without_host_devices = false
	I0916 10:56:18.771420  167544 command_runner.go:130] > #  allowed_annotations = []
	I0916 10:56:18.771423  167544 command_runner.go:130] > # Where:
	I0916 10:56:18.771428  167544 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0916 10:56:18.771434  167544 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0916 10:56:18.771439  167544 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 10:56:18.771445  167544 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 10:56:18.771448  167544 command_runner.go:130] > #   in $PATH.
	I0916 10:56:18.771454  167544 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0916 10:56:18.771459  167544 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 10:56:18.771466  167544 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0916 10:56:18.771470  167544 command_runner.go:130] > #   state.
	I0916 10:56:18.771476  167544 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 10:56:18.771481  167544 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 10:56:18.771487  167544 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 10:56:18.771494  167544 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 10:56:18.771500  167544 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 10:56:18.771505  167544 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 10:56:18.771509  167544 command_runner.go:130] > #   The currently recognized values are:
	I0916 10:56:18.771515  167544 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 10:56:18.771523  167544 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 10:56:18.771528  167544 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 10:56:18.771534  167544 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 10:56:18.771542  167544 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 10:56:18.771550  167544 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 10:56:18.771556  167544 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 10:56:18.771563  167544 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0916 10:56:18.771567  167544 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 10:56:18.771571  167544 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 10:56:18.771576  167544 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0916 10:56:18.771582  167544 command_runner.go:130] > runtime_type = "oci"
	I0916 10:56:18.771586  167544 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 10:56:18.771590  167544 command_runner.go:130] > runtime_config_path = ""
	I0916 10:56:18.771593  167544 command_runner.go:130] > monitor_path = ""
	I0916 10:56:18.771597  167544 command_runner.go:130] > monitor_cgroup = ""
	I0916 10:56:18.771600  167544 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 10:56:18.771624  167544 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0916 10:56:18.771629  167544 command_runner.go:130] > # running containers
	I0916 10:56:18.771633  167544 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0916 10:56:18.771638  167544 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0916 10:56:18.771644  167544 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0916 10:56:18.771650  167544 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0916 10:56:18.771654  167544 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0916 10:56:18.771658  167544 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0916 10:56:18.771668  167544 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0916 10:56:18.771674  167544 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0916 10:56:18.771678  167544 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0916 10:56:18.771683  167544 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0916 10:56:18.771689  167544 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 10:56:18.771695  167544 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 10:56:18.771703  167544 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 10:56:18.771717  167544 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 10:56:18.771725  167544 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 10:56:18.771731  167544 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 10:56:18.771739  167544 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 10:56:18.771747  167544 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 10:56:18.771752  167544 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 10:56:18.771758  167544 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 10:56:18.771762  167544 command_runner.go:130] > # Example:
	I0916 10:56:18.771767  167544 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 10:56:18.771771  167544 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 10:56:18.771776  167544 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 10:56:18.771781  167544 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 10:56:18.771784  167544 command_runner.go:130] > # cpuset = 0
	I0916 10:56:18.771788  167544 command_runner.go:130] > # cpushares = "0-1"
	I0916 10:56:18.771791  167544 command_runner.go:130] > # Where:
	I0916 10:56:18.771795  167544 command_runner.go:130] > # The workload name is workload-type.
	I0916 10:56:18.771801  167544 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 10:56:18.771807  167544 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 10:56:18.771812  167544 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 10:56:18.771819  167544 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 10:56:18.771825  167544 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 10:56:18.771828  167544 command_runner.go:130] > # 
	I0916 10:56:18.771834  167544 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 10:56:18.771837  167544 command_runner.go:130] > #
	I0916 10:56:18.771843  167544 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 10:56:18.771848  167544 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 10:56:18.771854  167544 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 10:56:18.771860  167544 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 10:56:18.771867  167544 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 10:56:18.771870  167544 command_runner.go:130] > [crio.image]
	I0916 10:56:18.771876  167544 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 10:56:18.771888  167544 command_runner.go:130] > # default_transport = "docker://"
	I0916 10:56:18.771894  167544 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 10:56:18.771900  167544 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:56:18.771904  167544 command_runner.go:130] > # global_auth_file = ""
	I0916 10:56:18.771912  167544 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 10:56:18.771922  167544 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:56:18.771927  167544 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 10:56:18.771939  167544 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 10:56:18.771946  167544 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:56:18.771951  167544 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:56:18.771956  167544 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 10:56:18.771970  167544 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 10:56:18.771976  167544 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 10:56:18.771981  167544 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 10:56:18.771987  167544 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 10:56:18.771991  167544 command_runner.go:130] > # pause_command = "/pause"
	I0916 10:56:18.772000  167544 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 10:56:18.772012  167544 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 10:56:18.772021  167544 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 10:56:18.772030  167544 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 10:56:18.772038  167544 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 10:56:18.772047  167544 command_runner.go:130] > # signature_policy = ""
	I0916 10:56:18.772056  167544 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 10:56:18.772068  167544 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 10:56:18.772074  167544 command_runner.go:130] > # changing them here.
	I0916 10:56:18.772083  167544 command_runner.go:130] > # insecure_registries = [
	I0916 10:56:18.772091  167544 command_runner.go:130] > # ]
	I0916 10:56:18.772101  167544 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 10:56:18.772110  167544 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 10:56:18.772116  167544 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 10:56:18.772124  167544 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 10:56:18.772128  167544 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 10:56:18.772134  167544 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 10:56:18.772140  167544 command_runner.go:130] > # CNI plugins.
	I0916 10:56:18.772144  167544 command_runner.go:130] > [crio.network]
	I0916 10:56:18.772150  167544 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 10:56:18.772155  167544 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 10:56:18.772162  167544 command_runner.go:130] > # cni_default_network = ""
	I0916 10:56:18.772168  167544 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 10:56:18.772172  167544 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 10:56:18.772178  167544 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 10:56:18.772184  167544 command_runner.go:130] > # plugin_dirs = [
	I0916 10:56:18.772189  167544 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 10:56:18.772197  167544 command_runner.go:130] > # ]
	I0916 10:56:18.772205  167544 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 10:56:18.772208  167544 command_runner.go:130] > [crio.metrics]
	I0916 10:56:18.772213  167544 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 10:56:18.772218  167544 command_runner.go:130] > # enable_metrics = false
	I0916 10:56:18.772223  167544 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 10:56:18.772229  167544 command_runner.go:130] > # By default, all metrics are enabled.
	I0916 10:56:18.772235  167544 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 10:56:18.772244  167544 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 10:56:18.772249  167544 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 10:56:18.772255  167544 command_runner.go:130] > # metrics_collectors = [
	I0916 10:56:18.772259  167544 command_runner.go:130] > # 	"operations",
	I0916 10:56:18.772266  167544 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 10:56:18.772270  167544 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 10:56:18.772277  167544 command_runner.go:130] > # 	"operations_errors",
	I0916 10:56:18.772281  167544 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 10:56:18.772288  167544 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 10:56:18.772292  167544 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 10:56:18.772298  167544 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 10:56:18.772301  167544 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 10:56:18.772305  167544 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 10:56:18.772312  167544 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 10:56:18.772316  167544 command_runner.go:130] > # 	"containers_oom_total",
	I0916 10:56:18.772322  167544 command_runner.go:130] > # 	"containers_oom",
	I0916 10:56:18.772326  167544 command_runner.go:130] > # 	"processes_defunct",
	I0916 10:56:18.772332  167544 command_runner.go:130] > # 	"operations_total",
	I0916 10:56:18.772336  167544 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 10:56:18.772342  167544 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 10:56:18.772347  167544 command_runner.go:130] > # 	"operations_errors_total",
	I0916 10:56:18.772353  167544 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 10:56:18.772358  167544 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 10:56:18.772365  167544 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 10:56:18.772369  167544 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 10:56:18.772376  167544 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 10:56:18.772381  167544 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 10:56:18.772387  167544 command_runner.go:130] > # ]
	I0916 10:56:18.772392  167544 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 10:56:18.772398  167544 command_runner.go:130] > # metrics_port = 9090
	I0916 10:56:18.772403  167544 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 10:56:18.772409  167544 command_runner.go:130] > # metrics_socket = ""
	I0916 10:56:18.772414  167544 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 10:56:18.772422  167544 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 10:56:18.772430  167544 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 10:56:18.772434  167544 command_runner.go:130] > # certificate on any modification event.
	I0916 10:56:18.772440  167544 command_runner.go:130] > # metrics_cert = ""
	I0916 10:56:18.772447  167544 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 10:56:18.772455  167544 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 10:56:18.772459  167544 command_runner.go:130] > # metrics_key = ""
	I0916 10:56:18.772465  167544 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 10:56:18.772470  167544 command_runner.go:130] > [crio.tracing]
	I0916 10:56:18.772476  167544 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 10:56:18.772482  167544 command_runner.go:130] > # enable_tracing = false
	I0916 10:56:18.772488  167544 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 10:56:18.772494  167544 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 10:56:18.772498  167544 command_runner.go:130] > # Number of samples to collect per million spans.
	I0916 10:56:18.772505  167544 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 10:56:18.772511  167544 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 10:56:18.772517  167544 command_runner.go:130] > [crio.stats]
	I0916 10:56:18.772522  167544 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 10:56:18.772529  167544 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 10:56:18.772534  167544 command_runner.go:130] > # stats_collection_period = 0
	I0916 10:56:18.772553  167544 command_runner.go:130] ! time="2024-09-16 10:56:18.767026414Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0916 10:56:18.772566  167544 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
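	The block above is CRI-O dumping its effective TOML configuration (version 1.24.6). For reference, the same keys can be read back programmatically; this is a minimal sketch, assuming the config lives at the standard /etc/crio/crio.conf path and using the github.com/BurntSushi/toml decoder (neither the path nor the library is named in the log itself):

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConfig models only the keys inspected here; every other table and
// key in crio.conf is simply ignored by the decoder.
type crioConfig struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
			MetricsPort   int  `toml:"metrics_port"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// Assumed path: where the KIC base image keeps CRI-O's config.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
	fmt.Println("metrics enabled:", cfg.Crio.Metrics.EnableMetrics, "port:", cfg.Crio.Metrics.MetricsPort)
}

	Note that commented-out keys in the dump (like # metrics_port = 9090) decode to zero values; only settings actually set, such as pause_image above, come back.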
	I0916 10:56:18.772615  167544 cni.go:84] Creating CNI manager for ""
	I0916 10:56:18.772627  167544 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:56:18.772636  167544 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:56:18.772656  167544 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-026168 NodeName:multinode-026168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:56:18.772796  167544 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-026168"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:56:18.772850  167544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:56:18.780294  167544 command_runner.go:130] > kubeadm
	I0916 10:56:18.780313  167544 command_runner.go:130] > kubectl
	I0916 10:56:18.780319  167544 command_runner.go:130] > kubelet
	I0916 10:56:18.781027  167544 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:56:18.781088  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:56:18.789105  167544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0916 10:56:18.804761  167544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:56:18.820790  167544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
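	The kubeadm config rendered above is what was just staged as /var/tmp/minikube/kubeadm.yaml.new (the 2154-byte scp). It is a multi-document YAML stream, so reading a field back requires iterating over documents; a sketch with gopkg.in/yaml.v3, decoding into plain maps rather than minikube's own types:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp line above.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Only the ClusterConfiguration document carries the pod subnet.
		if doc["kind"] == "ClusterConfiguration" {
			net := doc["networking"].(map[string]interface{})
			fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
			fmt.Println("podSubnet:", net["podSubnet"])
		}
	}
}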
	I0916 10:56:18.837776  167544 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:56:18.840971  167544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
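	The bash one-liner above makes the /etc/hosts update idempotent: drop any existing control-plane.minikube.internal mapping, then append the current one. The same logic in plain Go, sketched under the assumption that the caller can write /etc/hosts directly:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.67.2\t" + host // IP from this run

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Same filter as the grep -v above: drop stale mappings.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
		log.Fatal(err)
	}
}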
	I0916 10:56:18.851333  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:18.922056  167544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:18.934468  167544 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.2
	I0916 10:56:18.934493  167544 certs.go:194] generating shared ca certs ...
	I0916 10:56:18.934512  167544 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:18.934646  167544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:56:18.934692  167544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:56:18.934702  167544 certs.go:256] generating profile certs ...
	I0916 10:56:18.934771  167544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key
	I0916 10:56:18.934824  167544 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66
	I0916 10:56:18.934870  167544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key
	I0916 10:56:18.934882  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:56:18.934893  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:56:18.934903  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:56:18.934922  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:56:18.934932  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:56:18.934949  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:56:18.934963  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:56:18.934974  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:56:18.935021  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:56:18.935047  167544 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:56:18.935056  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:56:18.935078  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:56:18.935099  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:56:18.935119  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:56:18.935154  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:56:18.935178  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:56:18.935191  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:18.935202  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:56:18.935706  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:56:18.959424  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:56:18.982805  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:56:19.009907  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:56:19.033877  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:56:19.055139  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:56:19.109810  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:56:19.134029  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:56:19.155325  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:56:19.177084  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:56:19.198840  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:56:19.220351  167544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:56:19.235791  167544 ssh_runner.go:195] Run: openssl version
	I0916 10:56:19.240480  167544 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:56:19.240542  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:56:19.248588  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:56:19.251506  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:56:19.251542  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:56:19.251576  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:56:19.257356  167544 command_runner.go:130] > 3ec20f2e
	I0916 10:56:19.257646  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:56:19.265475  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:56:19.273846  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:19.276788  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:19.276818  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:19.276861  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:19.282745  167544 command_runner.go:130] > b5213941
	I0916 10:56:19.282810  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:56:19.290582  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:56:19.298674  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:56:19.301683  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:56:19.301719  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:56:19.301767  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:56:19.307875  167544 command_runner.go:130] > 51391683
	I0916 10:56:19.307928  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
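	Each CA bundle above is copied under /usr/share/ca-certificates and then linked as /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL locates trust anchors. A sketch of the same two steps from Go, shelling out to openssl for the hash just as the remote commands do (linkBySubjectHash is a hypothetical helper name, not a minikube function):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors `openssl x509 -hash -noout -in <pem>` followed
// by `ln -fs <pem> /etc/ssl/certs/<hash>.0`.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link, like ln -f
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}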
	I0916 10:56:19.316006  167544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:56:19.319044  167544 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:56:19.319067  167544 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:56:19.319076  167544 command_runner.go:130] > Device: 801h/2049d	Inode: 1050903     Links: 1
	I0916 10:56:19.319085  167544 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:19.319094  167544 command_runner.go:130] > Access: 2024-09-16 10:53:26.655075181 +0000
	I0916 10:56:19.319104  167544 command_runner.go:130] > Modify: 2024-09-16 10:53:26.655075181 +0000
	I0916 10:56:19.319115  167544 command_runner.go:130] > Change: 2024-09-16 10:53:26.655075181 +0000
	I0916 10:56:19.319124  167544 command_runner.go:130] >  Birth: 2024-09-16 10:53:26.655075181 +0000
	I0916 10:56:19.319176  167544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:56:19.325162  167544 command_runner.go:130] > Certificate will not expire
	I0916 10:56:19.325221  167544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:56:19.331128  167544 command_runner.go:130] > Certificate will not expire
	I0916 10:56:19.331342  167544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:56:19.336860  167544 command_runner.go:130] > Certificate will not expire
	I0916 10:56:19.337091  167544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:56:19.342771  167544 command_runner.go:130] > Certificate will not expire
	I0916 10:56:19.342943  167544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:56:19.348837  167544 command_runner.go:130] > Certificate will not expire
	I0916 10:56:19.348920  167544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:56:19.354738  167544 command_runner.go:130] > Certificate will not expire
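	openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 24 hours, which is why every probe above reports "Certificate will not expire". The equivalent check needs only the Go standard library; a sketch for one of the paths probed:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Same certificate the run probes with -checkend 86400.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: does NotAfter fall inside the next 24 hours?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}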
	I0916 10:56:19.354988  167544 kubeadm.go:392] StartCluster: {Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:19.355108  167544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:56:19.355165  167544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:56:19.388602  167544 cri.go:89] found id: ""
	I0916 10:56:19.388696  167544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:56:19.397493  167544 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0916 10:56:19.397513  167544 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0916 10:56:19.397520  167544 command_runner.go:130] > /var/lib/minikube/etcd:
	I0916 10:56:19.397523  167544 command_runner.go:130] > member
	I0916 10:56:19.397542  167544 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:56:19.397555  167544 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:56:19.397609  167544 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:56:19.405194  167544 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:56:19.405625  167544 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-026168" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:56:19.405737  167544 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-026168" cluster setting kubeconfig missing "multinode-026168" context setting]
	I0916 10:56:19.406007  167544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:19.406346  167544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:56:19.406582  167544 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
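	The rest.Config dump above is the mutual-TLS client minikube builds from the profile's client.crt/client.key and the cluster CA. An equivalent client constructed directly with client-go, sketched with the paths from this run:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	base := "/home/jenkins/minikube-integration/19651-3799/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.67.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/multinode-026168/client.crt",
			KeyFile:  base + "/profiles/multinode-026168/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-026168", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("node resourceVersion:", node.ResourceVersion)
}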
	I0916 10:56:19.406997  167544 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:56:19.407165  167544 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:56:19.414776  167544 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.67.2
	I0916 10:56:19.414804  167544 kubeadm.go:597] duration metric: took 17.24425ms to restartPrimaryControlPlane
	I0916 10:56:19.414811  167544 kubeadm.go:394] duration metric: took 59.831168ms to StartCluster
	I0916 10:56:19.414831  167544 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:19.414884  167544 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:56:19.415361  167544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:19.415544  167544 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:56:19.415613  167544 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:56:19.415794  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:56:19.418300  167544 out.go:177] * Verifying Kubernetes components...
	I0916 10:56:19.418306  167544 out.go:177] * Enabled addons: 
	I0916 10:56:19.419904  167544 addons.go:510] duration metric: took 4.293154ms for enable addons: enabled=[]
	I0916 10:56:19.419954  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:19.527525  167544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:19.595932  167544 node_ready.go:35] waiting up to 6m0s for node "multinode-026168" to be "Ready" ...
	I0916 10:56:19.596064  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:19.596075  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:19.596086  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:19.596094  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:19.596321  167544 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:56:19.596335  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:20.097063  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:20.097088  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:20.097100  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:20.097110  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.205786  167544 round_trippers.go:574] Response Status: 200 OK in 3108 milliseconds
	I0916 10:56:23.205831  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.205840  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:56:23.205847  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.205852  167544 round_trippers.go:580]     Audit-Id: b35e70fa-e82b-43fb-985f-c918a5a27d1c
	I0916 10:56:23.205856  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.205861  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.205865  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:56:23.211574  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"539","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6232 chars]
	I0916 10:56:23.212477  167544 node_ready.go:49] node "multinode-026168" has status "Ready":"True"
	I0916 10:56:23.212506  167544 node_ready.go:38] duration metric: took 3.616540556s for node "multinode-026168" to be "Ready" ...
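	node_ready.go's loop amounts to repeated GETs of /api/v1/nodes/multinode-026168 until the Ready condition reports True (3.6s here, most of it the apiserver's slow first response). A compact equivalent using client-go's wait helper; the kubeconfig path is the one repaired earlier in this log, and the 500ms/6m numbers mirror the "waiting up to 6m0s" above:

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube just repaired (path from this run).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3799/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 500ms, up to 6 minutes, until the Ready condition is True.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-026168", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("node is Ready")
}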
	I0916 10:56:23.212519  167544 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:56:23.212572  167544 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:56:23.212601  167544 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:56:23.212685  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:23.212694  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.212702  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.212706  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.217861  167544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:56:23.217889  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.217900  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:56:23.217906  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.217912  167544 round_trippers.go:580]     Audit-Id: 256579ea-42f1-4fa5-b0f4-8f7b60e0a8f9
	I0916 10:56:23.217917  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.217921  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.217924  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:56:23.218811  167544 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"639"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88790 chars]
	I0916 10:56:23.223628  167544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.223783  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:23.223799  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.223809  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.223822  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.226741  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:23.226759  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.226766  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:56:23.226772  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:56:23.226776  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.226779  167544 round_trippers.go:580]     Audit-Id: e6c47528-1963-4308-9300-bba33572f419
	I0916 10:56:23.226783  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.226790  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.226898  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"415","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0916 10:56:23.227346  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:23.227361  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.227371  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.227378  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.300855  167544 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0916 10:56:23.300892  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.300902  167544 round_trippers.go:580]     Audit-Id: a8ab9edf-21bf-4f46-93e4-fd7f0e8784fc
	I0916 10:56:23.300907  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.300911  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.300914  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:56:23.300919  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:56:23.300924  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.301512  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"539","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6232 chars]
	I0916 10:56:23.301954  167544 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:23.301984  167544 pod_ready.go:82] duration metric: took 78.312521ms for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.301997  167544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.302084  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:56:23.302097  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.302108  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.302116  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.306754  167544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:56:23.306781  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.306790  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.306796  167544 round_trippers.go:580]     Audit-Id: 4b89fa12-74ca-4be1-ae85-5295471d7857
	I0916 10:56:23.306800  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.306804  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.306808  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.306812  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.307460  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"382","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6435 chars]
	I0916 10:56:23.308009  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:23.308031  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.308041  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.308048  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.315994  167544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:56:23.316022  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.316032  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.316041  167544 round_trippers.go:580]     Audit-Id: 7dfe4d2e-af4f-47b8-8b8f-9782a46ecae2
	I0916 10:56:23.316046  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.316050  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.316056  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.316060  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.317089  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"539","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6232 chars]
	I0916 10:56:23.317551  167544 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:23.317575  167544 pod_ready.go:82] duration metric: took 15.564971ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.317600  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.317681  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:56:23.317693  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.317703  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.317711  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.399637  167544 round_trippers.go:574] Response Status: 200 OK in 81 milliseconds
	I0916 10:56:23.399722  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.399743  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.399758  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.399770  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.399783  167544 round_trippers.go:580]     Audit-Id: 17cd2fd9-ea6f-45b8-842a-d63047862d4b
	I0916 10:56:23.399796  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.399817  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.400001  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"384","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8513 chars]
	I0916 10:56:23.400613  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:23.400665  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.400686  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.400701  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.402394  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:23.402447  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.402460  167544 round_trippers.go:580]     Audit-Id: 30a93ea1-ec18-4e86-87a0-7cec67362a83
	I0916 10:56:23.402465  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.402470  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.402476  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.402480  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.402487  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.402619  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"539","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6232 chars]
	I0916 10:56:23.403032  167544 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:23.403054  167544 pod_ready.go:82] duration metric: took 85.446339ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.403068  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.403151  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:56:23.403166  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.403175  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.403182  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.405500  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:23.405521  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.405530  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.405537  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.405545  167544 round_trippers.go:580]     Audit-Id: a2d35877-8874-4103-96ce-de2df6207ce1
	I0916 10:56:23.405549  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.405554  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.405562  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.405731  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"380","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8088 chars]
	I0916 10:56:23.406250  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:23.406268  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.406275  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.406279  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.409553  167544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:23.409574  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.409583  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.409590  167544 round_trippers.go:580]     Audit-Id: 31e3c760-6d86-4f34-870a-6c4d349f25d4
	I0916 10:56:23.409599  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.409603  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.409608  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.409614  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.409723  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"539","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6232 chars]
	I0916 10:56:23.410114  167544 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:23.410135  167544 pod_ready.go:82] duration metric: took 7.059184ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
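
The `has status "Ready"` lines above come from reading the pod's PodReady condition out of each GET response. A minimal sketch of that kind of check against client-go follows; podIsReady is a hypothetical helper for illustration, not minikube's actual pod_ready.go logic, and it assumes a default kubeconfig is reachable.

// podIsReady reports whether a pod's PodReady condition is True,
// mirroring the `pod "..." has status "Ready":"True"` checks above.
// Sketch only: assumes client-go and a default kubeconfig.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(podIsReady(context.Background(), cs, "kube-system", "kube-controller-manager-multinode-026168"))
}
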
	I0916 10:56:23.410149  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.410227  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:56:23.410239  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.410249  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.410256  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.412353  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:23.412408  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.412433  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.412449  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.412463  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.412476  167544 round_trippers.go:580]     Audit-Id: 11fee976-be5e-41c4-97cc-59df417aeac3
	I0916 10:56:23.412499  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.412514  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.412636  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"348","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:56:23.413203  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:23.413225  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.413236  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.413243  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.414706  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:23.414724  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.414730  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.414733  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.414736  167544 round_trippers.go:580]     Audit-Id: 1f1049d3-80f1-443b-933b-66b99af9fe94
	I0916 10:56:23.414738  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.414741  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.414743  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.414860  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:23.415142  167544 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:23.415181  167544 pod_ready.go:82] duration metric: took 5.021336ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
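
Each pod check above is paired with a GET of the hosting node (/api/v1/nodes/...), which appears to confirm the node itself is healthy before the pod is counted as Ready. The node-side twin of the previous sketch reads the NodeReady condition the same way; again a hypothetical helper under the same client-go assumptions.

// nodeIsReady mirrors the GET /api/v1/nodes/... calls interleaved
// with the pod checks above. Sketch only, not minikube's code.
package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
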
	I0916 10:56:23.415199  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:23.613640  167544 request.go:632] Waited for 198.379762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:56:23.613718  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:56:23.613726  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.613734  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.613741  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.616058  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:23.616082  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.616093  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.616098  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.616102  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.616106  167544 round_trippers.go:580]     Audit-Id: 53d3a325-b2e5-44a2-b798-00d9a0a903bb
	I0916 10:56:23.616109  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.616114  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.616271  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"587","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:56:23.813071  167544 request.go:632] Waited for 196.372385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:56:23.813133  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:56:23.813144  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:23.813152  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:23.813159  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:23.815633  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:23.815657  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:23.815664  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:23.815668  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:23 GMT
	I0916 10:56:23.815672  167544 round_trippers.go:580]     Audit-Id: 557e70d6-e99c-4fd0-8a90-0367fda605e3
	I0916 10:56:23.815674  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:23.815678  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:23.815683  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:23.815810  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"605","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5735 chars]
	I0916 10:56:23.816129  167544 pod_ready.go:93] pod "kube-proxy-g86bs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:23.816146  167544 pod_ready.go:82] duration metric: took 400.934614ms for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
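
The "Waited for ... due to client-side throttling, not priority and fairness" lines are produced by client-go's token-bucket rate limiter, whose defaults (QPS 5, burst 10) are easy to exhaust when every pod check issues two GETs. A sketch of raising those limits on a rest.Config; the values are illustrative, not a recommendation.

// Raising client-go's client-side rate limit, the source of the
// throttling waits logged above. Sketch only; values illustrative.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cfg.QPS = 50    // client-go defaults to 5 requests/second
	cfg.Burst = 100 // and a burst of 10
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}
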
	I0916 10:56:23.816155  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:24.013186  167544 request.go:632] Waited for 196.969464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:56:24.013266  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:56:24.013275  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:24.013287  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:24.013298  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:24.015242  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:24.015267  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:24.015278  167544 round_trippers.go:580]     Audit-Id: d16fd95c-8b19-49e6-a5d0-6730a23ba6a9
	I0916 10:56:24.015284  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:24.015290  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:24.015295  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:24.015299  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:24.015304  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:24 GMT
	I0916 10:56:24.015444  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qds2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac30bd54-b932-4f52-a53c-4edbc5eefc7c","resourceVersion":"475","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:56:24.213294  167544 request.go:632] Waited for 197.391447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:56:24.213403  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:56:24.213416  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:24.213428  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:24.213436  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:24.215154  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:24.215176  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:24.215185  167544 round_trippers.go:580]     Audit-Id: ce0dadc5-b85a-4b5e-b8f1-4fab1ef8aa76
	I0916 10:56:24.215190  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:24.215194  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:24.215198  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:24.215202  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:24.215206  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:24 GMT
	I0916 10:56:24.215323  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"548","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6020 chars]
	I0916 10:56:24.215739  167544 pod_ready.go:93] pod "kube-proxy-qds2d" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:24.215759  167544 pod_ready.go:82] duration metric: took 399.598703ms for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:24.215770  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:24.413468  167544 request.go:632] Waited for 197.604199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:24.413545  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:24.413552  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:24.413563  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:24.413570  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:24.419621  167544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:56:24.419642  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:24.419650  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:24.419653  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:24.419656  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:24.419660  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:24 GMT
	I0916 10:56:24.419662  167544 round_trippers.go:580]     Audit-Id: d8eadd5b-8472-43c5-9cb3-f79d3a3b0397
	I0916 10:56:24.419664  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:24.419952  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:24.612764  167544 request.go:632] Waited for 192.264094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:24.612816  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:24.612821  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:24.612829  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:24.612839  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:24.615845  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:24.615882  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:24.615891  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:24 GMT
	I0916 10:56:24.615899  167544 round_trippers.go:580]     Audit-Id: e3eb0fbc-2ea3-4350-9cf5-3d8e360f7ce2
	I0916 10:56:24.615903  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:24.615907  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:24.615911  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:24.615915  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:24.616057  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:24.813134  167544 request.go:632] Waited for 96.283721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:24.813189  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:24.813194  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:24.813202  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:24.813206  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:24.815468  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:24.815492  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:24.815502  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:24.815506  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:24 GMT
	I0916 10:56:24.815510  167544 round_trippers.go:580]     Audit-Id: 82658f53-67d2-4381-9ed2-cb9378ebbf6c
	I0916 10:56:24.815515  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:24.815520  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:24.815525  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:24.815673  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:25.013427  167544 request.go:632] Waited for 197.356939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:25.013512  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:25.013518  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:25.013525  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:25.013528  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:25.016098  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:25.016124  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:25.016134  167544 round_trippers.go:580]     Audit-Id: d327bb86-2295-41a5-91f7-6b0627927e77
	I0916 10:56:25.016139  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:25.016143  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:25.016148  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:25.016152  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:25.016156  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:25 GMT
	I0916 10:56:25.016291  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:25.216492  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:25.216514  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:25.216521  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:25.216526  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:25.218456  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:25.218483  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:25.218493  167544 round_trippers.go:580]     Audit-Id: 72e89203-74f2-4b09-85a3-9104dd1db397
	I0916 10:56:25.218498  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:25.218504  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:25.218507  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:25.218511  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:25.218516  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:25 GMT
	I0916 10:56:25.218708  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:25.413593  167544 request.go:632] Waited for 194.392785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:25.413662  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:25.413669  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:25.413676  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:25.413680  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:25.415891  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:25.415915  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:25.415924  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:25.415931  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:25.415935  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:25.415939  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:25 GMT
	I0916 10:56:25.415944  167544 round_trippers.go:580]     Audit-Id: a3ff4ccc-573b-4b75-8d5a-4ba5e01a3db5
	I0916 10:56:25.415948  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:25.416104  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:25.716356  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:25.716380  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:25.716395  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:25.716399  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:25.718351  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:25.718371  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:25.718378  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:25.718383  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:25.718385  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:25.718388  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:25.718391  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:25 GMT
	I0916 10:56:25.718395  167544 round_trippers.go:580]     Audit-Id: 0e5e3094-cb90-45c4-b0db-52700fcac711
	I0916 10:56:25.718559  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:25.813226  167544 request.go:632] Waited for 94.232494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:25.813351  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:25.813365  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:25.813376  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:25.813382  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:25.816030  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:25.816053  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:25.816061  167544 round_trippers.go:580]     Audit-Id: 7e8ca180-f4d6-44ba-aca6-25160bf8c52a
	I0916 10:56:25.816066  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:25.816069  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:25.816073  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:25.816076  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:25.816086  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:25 GMT
	I0916 10:56:25.816227  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:26.216833  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:26.216855  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:26.216870  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:26.216876  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:26.219029  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:26.219050  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:26.219058  167544 round_trippers.go:580]     Audit-Id: e20b7272-0be5-43b9-95fe-40b2fcabd13d
	I0916 10:56:26.219061  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:26.219064  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:26.219069  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:26.219073  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:26.219079  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:26 GMT
	I0916 10:56:26.219234  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:26.219671  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:26.219686  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:26.219692  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:26.219696  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:26.221506  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:26.221526  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:26.221536  167544 round_trippers.go:580]     Audit-Id: cecfb797-d022-4903-8009-d52dd0df23c0
	I0916 10:56:26.221543  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:26.221547  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:26.221554  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:26.221563  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:26.221571  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:26 GMT
	I0916 10:56:26.221714  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:26.221998  167544 pod_ready.go:103] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"False"
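
From here the scheduler pod reports "Ready":"False", so the checker keeps re-polling on a fixed cadence (the retries above land roughly 500ms apart) until the pod turns Ready or the 6m0s budget expires. A comparable loop can be built from apimachinery's wait helpers; this is a sketch reusing the hypothetical podIsReady from the earlier snippet, not minikube's actual retry code.

// Poll a readiness condition every 500ms for up to 6 minutes,
// approximating the retry cadence visible in this log.
package podready

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	// podIsReady is the hypothetical helper sketched earlier.
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return podIsReady(ctx, cs, ns, name)
		})
}
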
	I0916 10:56:26.716333  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:26.716361  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:26.716372  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:26.716380  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:26.718956  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:26.718985  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:26.718994  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:26.719000  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:26.719005  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:26.719011  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:26.719015  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:26 GMT
	I0916 10:56:26.719021  167544 round_trippers.go:580]     Audit-Id: 86903210-40c3-4f52-ba61-3a7fc4cf989e
	I0916 10:56:26.719195  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:26.719697  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:26.719716  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:26.719726  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:26.719731  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:26.721702  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:26.721722  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:26.721731  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:26 GMT
	I0916 10:56:26.721737  167544 round_trippers.go:580]     Audit-Id: 68c0ddd5-a488-43e7-8370-6db759c3f1ec
	I0916 10:56:26.721741  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:26.721745  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:26.721748  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:26.721752  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:26.721855  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:27.216600  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:27.216626  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:27.216633  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:27.216639  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:27.218966  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:27.218984  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:27.218993  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:27.218999  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:27.219003  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:27.219009  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:27 GMT
	I0916 10:56:27.219014  167544 round_trippers.go:580]     Audit-Id: bc1554ce-c8b1-4f24-b287-e058c7f36028
	I0916 10:56:27.219018  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:27.219191  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:27.219710  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:27.219728  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:27.219735  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:27.219738  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:27.221605  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:27.221625  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:27.221636  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:27 GMT
	I0916 10:56:27.221641  167544 round_trippers.go:580]     Audit-Id: fb190057-9d98-41d7-874c-3a939e961364
	I0916 10:56:27.221645  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:27.221649  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:27.221656  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:27.221660  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:27.221854  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:27.716670  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:27.716698  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:27.716709  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:27.716713  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:27.719015  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:27.719040  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:27.719049  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:27 GMT
	I0916 10:56:27.719057  167544 round_trippers.go:580]     Audit-Id: dfc6b68e-5e3a-4132-b270-14a1bab6f1d8
	I0916 10:56:27.719063  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:27.719069  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:27.719073  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:27.719079  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:27.719330  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:27.719724  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:27.719738  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:27.719744  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:27.719750  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:27.721945  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:27.721965  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:27.721972  167544 round_trippers.go:580]     Audit-Id: be81eff3-4d51-4a8b-b95a-8c98d36d16fd
	I0916 10:56:27.721976  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:27.721979  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:27.721982  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:27.721985  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:27.721987  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:27 GMT
	I0916 10:56:27.722153  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:28.216902  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:28.216927  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:28.216935  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:28.216939  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:28.219282  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:28.219302  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:28.219308  167544 round_trippers.go:580]     Audit-Id: d4d2cffe-8a5e-41cc-ad0a-f30010761121
	I0916 10:56:28.219312  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:28.219317  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:28.219319  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:28.219322  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:28.219327  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:28 GMT
	I0916 10:56:28.219441  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:28.219845  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:28.219859  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:28.219866  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:28.219870  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:28.221623  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:28.221644  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:28.221650  167544 round_trippers.go:580]     Audit-Id: f560aaa8-6a11-43af-b981-6e9bc0fd27f4
	I0916 10:56:28.221653  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:28.221655  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:28.221658  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:28.221660  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:28.221663  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:28 GMT
	I0916 10:56:28.221805  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:28.222091  167544 pod_ready.go:103] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:28.716384  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:28.716411  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:28.716420  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:28.716424  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:28.718731  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:28.718750  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:28.718757  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:28.718760  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:28 GMT
	I0916 10:56:28.718764  167544 round_trippers.go:580]     Audit-Id: b55c9492-7b75-4f84-a32a-3eb2c4067c03
	I0916 10:56:28.718767  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:28.718770  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:28.718772  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:28.718929  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:28.719335  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:28.719349  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:28.719356  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:28.719360  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:28.721292  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:28.721310  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:28.721318  167544 round_trippers.go:580]     Audit-Id: f99668b0-43e2-48d6-95e2-2bcdf4556cf4
	I0916 10:56:28.721326  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:28.721346  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:28.721353  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:28.721357  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:28.721362  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:28 GMT
	I0916 10:56:28.721921  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:29.216018  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:29.216045  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:29.216057  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:29.216064  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:29.218379  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:29.218398  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:29.218404  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:29 GMT
	I0916 10:56:29.218408  167544 round_trippers.go:580]     Audit-Id: c70f3c98-346a-46f9-b781-2a1c34926bd8
	I0916 10:56:29.218410  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:29.218414  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:29.218419  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:29.218425  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:29.218557  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:29.219073  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:29.219089  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:29.219097  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:29.219101  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:29.220725  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:29.220743  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:29.220749  167544 round_trippers.go:580]     Audit-Id: 74bddfe2-6ef4-48be-bbe5-a8e3782dafd5
	I0916 10:56:29.220752  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:29.220755  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:29.220759  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:29.220761  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:29.220765  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:29 GMT
	I0916 10:56:29.220933  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:29.716987  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:29.717015  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:29.717023  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:29.717029  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:29.719205  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:29.719230  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:29.719238  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:29.719241  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:29 GMT
	I0916 10:56:29.719245  167544 round_trippers.go:580]     Audit-Id: a7faf451-e6d8-4026-8584-5ccd2b8abfeb
	I0916 10:56:29.719248  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:29.719251  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:29.719254  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:29.719412  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:29.719791  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:29.719802  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:29.719809  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:29.719813  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:29.721580  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:29.721594  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:29.721599  167544 round_trippers.go:580]     Audit-Id: b203f515-d009-441b-b5f6-dbbcddf279d5
	I0916 10:56:29.721603  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:29.721606  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:29.721609  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:29.721612  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:29.721616  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:29 GMT
	I0916 10:56:29.721861  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:30.216586  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:30.216631  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.216642  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.216647  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.218868  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:30.218890  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.218899  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.218906  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.218911  167544 round_trippers.go:580]     Audit-Id: 55f71246-0ae1-4011-be82-a36fa986d743
	I0916 10:56:30.218915  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.218919  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.218922  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.219053  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"687","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:56:30.219500  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:30.219523  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.219533  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.219541  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.221457  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:30.221476  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.221483  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.221487  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.221489  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.221492  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.221497  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.221501  167544 round_trippers.go:580]     Audit-Id: 610b8f02-31b8-42a9-96da-91581d6c3312
	I0916 10:56:30.221696  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:30.716367  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:56:30.716399  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.716411  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.716415  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.718654  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:30.718679  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.718689  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.718695  167544 round_trippers.go:580]     Audit-Id: 2c934cc6-9f36-4909-97bb-0f0dc7ea4b2d
	I0916 10:56:30.718700  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.718708  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.718713  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.718719  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.718946  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"723","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5101 chars]
	I0916 10:56:30.719429  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:30.719446  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.719456  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.719463  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.721045  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:30.721059  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.721067  167544 round_trippers.go:580]     Audit-Id: d5d36d29-fa93-4b61-b131-694a3abe30e8
	I0916 10:56:30.721072  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.721076  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.721080  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.721085  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.721089  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.721192  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:30.721518  167544 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:30.721533  167544 pod_ready.go:82] duration metric: took 6.505754342s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:30.721543  167544 pod_ready.go:39] duration metric: took 7.509008938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
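
The loop above is minikube's pod_ready wait: roughly every 500ms it re-fetches the kube-scheduler pod (and its node) until the pod's Ready condition flips to True, which the pod_ready.go:93 line finally reports at 10:56:30. A minimal client-go sketch of the same check follows; the kubeconfig path and the helper name waitPodReady are illustrative assumptions, not minikube's own code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; the test harness writes its own under MINIKUBE_HOME.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-multinode-026168"); err != nil {
            panic(err)
        }
        fmt.Println(`pod has status "Ready":"True"`)
    }

    // waitPodReady polls the pod at the ~500ms cadence visible in the
    // timestamps above until its PodReady condition reports True.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

Polling with a bare Get keeps the wait easy to read in logs like this one; a production client would more likely use a watch or an informer to avoid the repeated GET traffic shown above.
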
	I0916 10:56:30.721556  167544 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:56:30.721619  167544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:56:30.731343  167544 command_runner.go:130] > 1035
	I0916 10:56:30.732144  167544 api_server.go:72] duration metric: took 11.31656924s to wait for apiserver process to appear ...
	I0916 10:56:30.732168  167544 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:56:30.732190  167544 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:56:30.736830  167544 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
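
The healthz gate above has two parts: pgrep confirms the kube-apiserver process exists inside the node (PID 1035), then an HTTPS GET to /healthz must return the literal body "ok". A hedged sketch of the HTTP half, reusing a clientset built as in the previous sketch (package and helper names are assumptions):

    package sketch

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // healthz issues GET /healthz through the already-authenticated REST
    // client and expects the two-byte body "ok", exactly as logged above.
    func healthz(ctx context.Context, cs kubernetes.Interface) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("apiserver unhealthy: %q", body)
        }
        return nil
    }
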
	I0916 10:56:30.736901  167544 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0916 10:56:30.736912  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.736920  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.736924  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.737716  167544 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:56:30.737738  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.737748  167544 round_trippers.go:580]     Audit-Id: 98f28cca-8d54-48d7-8e3e-097ab888c7c4
	I0916 10:56:30.737755  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.737764  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.737770  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.737778  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.737784  167544 round_trippers.go:580]     Content-Length: 263
	I0916 10:56:30.737787  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.737801  167544 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:56:30.737952  167544 api_server.go:141] control plane version: v1.31.1
	I0916 10:56:30.737970  167544 api_server.go:131] duration metric: took 5.797471ms to wait for apiserver health ...
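
The /version payload above is the standard discovery document; client-go decodes it into a version.Info, which is presumably where the "control plane version: v1.31.1" line comes from. A one-call sketch under the same clientset assumption:

    package sketch

    import "k8s.io/client-go/kubernetes"

    // serverVersion decodes the same JSON shown above; GitVersion carries
    // the "v1.31.1" string reported as the control plane version.
    func serverVersion(cs kubernetes.Interface) (string, error) {
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }
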
	I0916 10:56:30.737978  167544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:56:30.738047  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:30.738054  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.738061  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.738069  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.740493  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:30.740513  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.740523  167544 round_trippers.go:580]     Audit-Id: 9019fc31-d88a-4433-9515-778348d73f57
	I0916 10:56:30.740529  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.740534  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.740539  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.740544  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.740556  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.741218  167544 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"723"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 91422 chars]
	I0916 10:56:30.743855  167544 system_pods.go:59] 12 kube-system pods found
	I0916 10:56:30.743906  167544 system_pods.go:61] "coredns-7c65d6cfc9-s82cx" [85130138-c50d-47a8-8bbe-de91bb9a0472] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:56:30.743920  167544 system_pods.go:61] "etcd-multinode-026168" [7221a4cc-7e2d-41a3-b83b-579646af2de2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:56:30.743925  167544 system_pods.go:61] "kindnet-2jtzj" [530fad1f-573c-4186-b57e-287f820fc065] Running
	I0916 10:56:30.743930  167544 system_pods.go:61] "kindnet-mckv5" [33f14b42-6960-4bd0-b467-60342a55aff6] Running
	I0916 10:56:30.743935  167544 system_pods.go:61] "kindnet-zv2p5" [9e993dc5-3e51-407a-96f0-81c74274fb7c] Running
	I0916 10:56:30.743940  167544 system_pods.go:61] "kube-apiserver-multinode-026168" [e0a10f33-efc2-4f2d-b46c-bdb68cf664ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:56:30.743948  167544 system_pods.go:61] "kube-controller-manager-multinode-026168" [c0b53919-27a0-4a54-ba15-a530a06dbf0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:56:30.743953  167544 system_pods.go:61] "kube-proxy-6p6vt" [42162ba1-cb61-4a95-acc5-5c4c5f3ead8c] Running
	I0916 10:56:30.743957  167544 system_pods.go:61] "kube-proxy-g86bs" [efc5e34d-fd17-408e-ad74-cd36ded784b3] Running
	I0916 10:56:30.743960  167544 system_pods.go:61] "kube-proxy-qds2d" [ac30bd54-b932-4f52-a53c-4edbc5eefc7c] Running
	I0916 10:56:30.743964  167544 system_pods.go:61] "kube-scheduler-multinode-026168" [b293178b-0aac-457b-b950-71fdd2c8fa80] Running
	I0916 10:56:30.743967  167544 system_pods.go:61] "storage-provisioner" [ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7] Running
	I0916 10:56:30.743973  167544 system_pods.go:74] duration metric: took 5.990131ms to wait for pod list to return data ...
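
The "Running / Ready:ContainersNotReady (...)" annotations above are read off each pod's status conditions: the control-plane containers were just restarted and have not passed their readiness probes yet, so Ready and ContainersReady are still False with a reason and message attached. A sketch that reproduces this summary from one PodList call (same clientset assumption; summarizePods is an illustrative name):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // summarizePods prints one line per kube-system pod, appending any
    // Ready/ContainersReady condition that is not yet True, which yields
    // lines shaped like the system_pods.go output above.
    func summarizePods(ctx context.Context, cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            line := fmt.Sprintf("%q [%s] %s", p.Name, p.UID, p.Status.Phase)
            for _, c := range p.Status.Conditions {
                if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) && c.Status != corev1.ConditionTrue {
                    line += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
                }
            }
            fmt.Println(line)
        }
        return nil
    }
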
	I0916 10:56:30.743983  167544 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:56:30.744043  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:56:30.744051  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.744057  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.744061  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.746462  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:30.746481  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.746495  167544 round_trippers.go:580]     Content-Length: 261
	I0916 10:56:30.746498  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.746502  167544 round_trippers.go:580]     Audit-Id: 9977109f-baaa-4c6f-b141-74fa3b8141f6
	I0916 10:56:30.746505  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.746507  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.746514  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.746517  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.746532  167544 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"723"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3f54840f-e917-4b73-aac8-060ce8f211be","resourceVersion":"325","creationTimestamp":"2024-09-16T10:53:39Z"}}]}
	I0916 10:56:30.746699  167544 default_sa.go:45] found service account: "default"
	I0916 10:56:30.746714  167544 default_sa.go:55] duration metric: took 2.725835ms for default service account to be created ...
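
The default_sa wait matters because, with the ServiceAccount admission plugin enabled, pods in a namespace cannot be created until its "default" ServiceAccount exists; the check itself is a single Get. A sketch under the same clientset assumption:

    package sketch

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultSAExists mirrors the default_sa.go check above: NotFound means
    // keep waiting, any other error is fatal, success means proceed.
    func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        return err == nil, err
    }
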
	I0916 10:56:30.746721  167544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:56:30.746770  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:30.746778  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.746784  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.746787  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.749473  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:30.749497  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.749507  167544 round_trippers.go:580]     Audit-Id: c58d2573-dea4-4f61-9f8d-a50109cbb6cb
	I0916 10:56:30.749514  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.749519  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.749524  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.749527  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.749532  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.750089  167544 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"723"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 91422 chars]
	I0916 10:56:30.752718  167544 system_pods.go:86] 12 kube-system pods found
	I0916 10:56:30.752742  167544 system_pods.go:89] "coredns-7c65d6cfc9-s82cx" [85130138-c50d-47a8-8bbe-de91bb9a0472] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:56:30.752750  167544 system_pods.go:89] "etcd-multinode-026168" [7221a4cc-7e2d-41a3-b83b-579646af2de2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:56:30.752754  167544 system_pods.go:89] "kindnet-2jtzj" [530fad1f-573c-4186-b57e-287f820fc065] Running
	I0916 10:56:30.752758  167544 system_pods.go:89] "kindnet-mckv5" [33f14b42-6960-4bd0-b467-60342a55aff6] Running
	I0916 10:56:30.752762  167544 system_pods.go:89] "kindnet-zv2p5" [9e993dc5-3e51-407a-96f0-81c74274fb7c] Running
	I0916 10:56:30.752768  167544 system_pods.go:89] "kube-apiserver-multinode-026168" [e0a10f33-efc2-4f2d-b46c-bdb68cf664ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:56:30.752780  167544 system_pods.go:89] "kube-controller-manager-multinode-026168" [c0b53919-27a0-4a54-ba15-a530a06dbf0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:56:30.752787  167544 system_pods.go:89] "kube-proxy-6p6vt" [42162ba1-cb61-4a95-acc5-5c4c5f3ead8c] Running
	I0916 10:56:30.752796  167544 system_pods.go:89] "kube-proxy-g86bs" [efc5e34d-fd17-408e-ad74-cd36ded784b3] Running
	I0916 10:56:30.752801  167544 system_pods.go:89] "kube-proxy-qds2d" [ac30bd54-b932-4f52-a53c-4edbc5eefc7c] Running
	I0916 10:56:30.752810  167544 system_pods.go:89] "kube-scheduler-multinode-026168" [b293178b-0aac-457b-b950-71fdd2c8fa80] Running
	I0916 10:56:30.752816  167544 system_pods.go:89] "storage-provisioner" [ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7] Running
	I0916 10:56:30.752824  167544 system_pods.go:126] duration metric: took 6.096999ms to wait for k8s-apps to be running ...
	I0916 10:56:30.752835  167544 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:56:30.752878  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:56:30.763834  167544 system_svc.go:56] duration metric: took 10.990789ms WaitForService to wait for kubelet
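
systemctl is-active --quiet <unit> prints nothing and reports state purely through its exit code (0 means active), which is why the log records only the duration. A local sketch of the check; minikube actually runs it through the machine's SSH runner:

    package sketch

    import "os/exec"

    // unitActive returns true when the systemd unit reports active; with
    // --quiet the command signals via exit status alone, as in the log above.
    func unitActive(unit string) bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
    }
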
	I0916 10:56:30.763860  167544 kubeadm.go:582] duration metric: took 11.348293521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:56:30.763875  167544 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:56:30.763943  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:56:30.763951  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:30.763957  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:30.763960  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:30.766437  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:30.766456  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:30.766464  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:30.766469  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:30 GMT
	I0916 10:56:30.766474  167544 round_trippers.go:580]     Audit-Id: 87f928e3-4190-4dfc-91f7-71cacb4a9e1f
	I0916 10:56:30.766479  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:30.766485  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:30.766489  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:30.766684  167544 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"723"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 20056 chars]
	I0916 10:56:30.767358  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:56:30.767375  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:56:30.767385  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:56:30.767388  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:56:30.767392  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:56:30.767395  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:56:30.767399  167544 node_conditions.go:105] duration metric: took 3.519808ms to run NodePressure ...
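
The NodePressure verification reads each node's capacity and pressure conditions from one NodeList call; the three capacity pairs above correspond to the three nodes of the multinode cluster. A sketch under the same clientset assumption (resource.Quantity's String method has a pointer receiver, hence the intermediate variables):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure prints per-node capacities and fails if any
    // memory/disk/PID pressure condition is True (all were False here).
    func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", eph.String()) // e.g. 304681132Ki
            fmt.Printf("node cpu capacity is %s\n", cpu.String())               // e.g. 8
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                    }
                }
            }
        }
        return nil
    }
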
	I0916 10:56:30.767414  167544 start.go:241] waiting for startup goroutines ...
	I0916 10:56:30.767424  167544 start.go:246] waiting for cluster config update ...
	I0916 10:56:30.767431  167544 start.go:255] writing updated cluster config ...
	I0916 10:56:30.769664  167544 out.go:201] 
	I0916 10:56:30.771128  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:56:30.771207  167544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:56:30.772821  167544 out.go:177] * Starting "multinode-026168-m02" worker node in "multinode-026168" cluster
	I0916 10:56:30.773980  167544 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:56:30.775655  167544 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:56:30.777049  167544 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:56:30.777072  167544 cache.go:56] Caching tarball of preloaded images
	I0916 10:56:30.777081  167544 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:56:30.777174  167544 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:56:30.777188  167544 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
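
The preload check above boils down to an existence test on the cached tarball: only a missing file would trigger a fresh download. A sketch of the predicate (the real code may validate the file further):

    package sketch

    import "os"

    // preloadCached reports whether the preloaded-images tarball is already
    // on disk, the condition behind the "skipping download" line above.
    func preloadCached(path string) bool {
        info, err := os.Stat(path)
        return err == nil && info.Mode().IsRegular()
    }
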
	I0916 10:56:30.777287  167544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	W0916 10:56:30.795562  167544 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:56:30.795584  167544 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:56:30.795683  167544 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:56:30.795705  167544 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:56:30.795711  167544 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:56:30.795724  167544 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:56:30.795735  167544 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:56:30.796914  167544 image.go:273] response: 
	I0916 10:56:30.853618  167544 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:56:30.853648  167544 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:56:30.853680  167544 start.go:360] acquireMachinesLock for multinode-026168-m02: {Name:mk244ea9c32e56587b67dd9c9f2d4f0dcccd26e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:56:30.853741  167544 start.go:364] duration metric: took 43.337µs to acquireMachinesLock for "multinode-026168-m02"
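
acquireMachinesLock serializes machine create/start across concurrent minikube invocations; the 43µs duration shows the lock was uncontended here. A sketch of such a coarse lock using flock(2), assuming Linux; minikube's actual lock helper works differently:

    package sketch

    import (
        "os"
        "syscall"
    )

    // lockMachines takes an exclusive advisory lock on a well-known path;
    // a second process blocks in Flock until the first releases the lock.
    func lockMachines(path string) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            f.Close()
            return nil, err
        }
        return f, nil // release with syscall.Flock(int(f.Fd()), syscall.LOCK_UN) and f.Close()
    }
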
	I0916 10:56:30.853758  167544 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:56:30.853764  167544 fix.go:54] fixHost starting: m02
	I0916 10:56:30.853982  167544 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:56:30.871620  167544 fix.go:112] recreateIfNeeded on multinode-026168-m02: state=Stopped err=<nil>
	W0916 10:56:30.871646  167544 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:56:30.873291  167544 out.go:177] * Restarting existing docker container for "multinode-026168-m02" ...
	I0916 10:56:30.874681  167544 cli_runner.go:164] Run: docker start multinode-026168-m02
	I0916 10:56:31.151195  167544 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:56:31.171572  167544 kic.go:430] container "multinode-026168-m02" state is running.
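
docker container inspect --format={{.State.Status}} is how minikube distinguishes a stopped container (needing the docker start above) from a running one. A sketch shelling out to the same CLI:

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // containerState returns docker's view of the container, e.g. "exited"
    // before the docker start above and "running" immediately after it.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "--format", "{{.State.Status}}", name).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }
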
	I0916 10:56:31.171967  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:56:31.190168  167544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:56:31.190442  167544 machine.go:93] provisionDockerMachine start ...
	I0916 10:56:31.190510  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:31.208498  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:31.208725  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0916 10:56:31.208744  167544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:56:31.209426  167544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35880->127.0.0.1:32928: read: connection reset by peer
	I0916 10:56:34.340755  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m02
	
	I0916 10:56:34.340787  167544 ubuntu.go:169] provisioning hostname "multinode-026168-m02"
	I0916 10:56:34.340847  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:34.358573  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:34.358770  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0916 10:56:34.358793  167544 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168-m02 && echo "multinode-026168-m02" | sudo tee /etc/hostname
	I0916 10:56:34.501299  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m02
	
	I0916 10:56:34.501409  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:34.524509  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:34.524734  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0916 10:56:34.524752  167544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:56:34.661500  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
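
The shell snippet above is an idempotent /etc/hosts fix: it rewrites the 127.0.1.1 line if one exists, appends it otherwise, and does nothing when the hostname is already present. minikube pushes such commands through libmachine's native SSH client on the forwarded port (32928 here); a stripped-down sketch with golang.org/x/crypto/ssh, with host-key checking disabled purely for illustration:

    package sketch

    import "golang.org/x/crypto/ssh"

    // runOverSSH dials the forwarded SSH port and runs one command, roughly
    // what each "About to run SSH command" block in this log corresponds to.
    func runOverSSH(addr, user string, auth ssh.AuthMethod, cmd string) (string, error) {
        cfg := &ssh.ClientConfig{
            User:            user, // "docker" for kic machines, per the sshutil line later in the log
            Auth:            []ssh.AuthMethod{auth},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only; never do this in production
        }
        client, err := ssh.Dial("tcp", addr, cfg) // e.g. "127.0.0.1:32928"
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }
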
	I0916 10:56:34.661531  167544 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:56:34.661552  167544 ubuntu.go:177] setting up certificates
	I0916 10:56:34.661566  167544 provision.go:84] configureAuth start
	I0916 10:56:34.661621  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:56:34.678694  167544 provision.go:143] copyHostCerts
	I0916 10:56:34.678728  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:56:34.678756  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:56:34.678763  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:56:34.678824  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:56:34.678895  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:56:34.678915  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:56:34.678921  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:56:34.678945  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:56:34.678991  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:56:34.679007  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:56:34.679013  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:56:34.679033  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:56:34.679080  167544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-026168-m02]
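Note: provision.go:117 issues a per-machine server certificate whose SANs are exactly the names and addresses a client might dial: the loopback address, the node IP 192.168.67.3, localhost, minikube, and the hostname. A self-contained sketch of building such a SAN certificate in Go (self-signed here for brevity; minikube actually signs with ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-026168-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as logged: san=[127.0.0.1 192.168.67.3 localhost minikube multinode-026168-m02]
            DNSNames:    []string{"localhost", "minikube", "multinode-026168-m02"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.3")},
        }
        // Self-signed for brevity; provision.go signs with the minikube CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }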
	I0916 10:56:34.854956  167544 provision.go:177] copyRemoteCerts
	I0916 10:56:34.855032  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:56:34.855064  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:34.874215  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:56:34.970063  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:56:34.970133  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:56:34.991429  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:56:34.991495  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:56:35.013731  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:56:35.013805  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:56:35.035547  167544 provision.go:87] duration metric: took 373.969235ms to configureAuth
	I0916 10:56:35.035573  167544 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:56:35.035757  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:56:35.035863  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:35.053256  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:35.053489  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0916 10:56:35.053511  167544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:56:35.309461  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:56:35.309495  167544 machine.go:96] duration metric: took 4.119030951s to provisionDockerMachine
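Note: the CRIO_MINIKUBE_OPTIONS write above only has an effect because the image's crio.service is assumed to source /etc/sysconfig/crio.minikube as an environment file and splice the variable into its command line, roughly:

    # assumed unit layout, not copied from the kicbase image
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS

The leading "-" makes the file optional, so CRI-O still starts on images where minikube never wrote it; the systemctl restart then picks up --insecure-registry 10.96.0.0/12 for the in-cluster service CIDR.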
	I0916 10:56:35.309513  167544 start.go:293] postStartSetup for "multinode-026168-m02" (driver="docker")
	I0916 10:56:35.309523  167544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:56:35.309588  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:56:35.309627  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:35.328751  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:56:35.422267  167544 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:56:35.425499  167544 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:56:35.425525  167544 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:56:35.425535  167544 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:56:35.425543  167544 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:56:35.425549  167544 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:56:35.425553  167544 command_runner.go:130] > ID=ubuntu
	I0916 10:56:35.425556  167544 command_runner.go:130] > ID_LIKE=debian
	I0916 10:56:35.425563  167544 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:56:35.425570  167544 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:56:35.425580  167544 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:56:35.425588  167544 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:56:35.425596  167544 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:56:35.425657  167544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:56:35.425688  167544 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:56:35.425698  167544 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:56:35.425707  167544 info.go:137] Remote host: Ubuntu 22.04.4 LTS
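Note: the "Couldn't set key ... no corresponding struct field found" warnings above are benign. libmachine maps the key=value pairs of /etc/os-release onto a struct and simply reports any key it has no field for, then moves on. A sketch of that parse (field set abridged; assumed logic, not libmachine's code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        known := map[string]bool{"NAME": true, "VERSION": true, "ID": true,
            "ID_LIKE": true, "PRETTY_NAME": true, "VERSION_ID": true}
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), "=")
            if !ok {
                continue // blank or malformed line
            }
            v = strings.Trim(v, `"`)
            if !known[k] {
                // e.g. VERSION_CODENAME, PRIVACY_POLICY_URL, UBUNTU_CODENAME above
                fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", k)
                continue
            }
            fmt.Printf("%s=%s\n", k, v)
        }
    }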
	I0916 10:56:35.425719  167544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:56:35.425776  167544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:56:35.425844  167544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:56:35.425864  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:56:35.425989  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:56:35.434191  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:56:35.456244  167544 start.go:296] duration metric: took 146.710898ms for postStartSetup
	I0916 10:56:35.456323  167544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:56:35.456372  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:35.473031  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:56:35.565916  167544 command_runner.go:130] > 31%
	I0916 10:56:35.566123  167544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:56:35.570618  167544 command_runner.go:130] > 203G
	I0916 10:56:35.570652  167544 fix.go:56] duration metric: took 4.716886965s for fixHost
	I0916 10:56:35.570663  167544 start.go:83] releasing machines lock for "multinode-026168-m02", held for 4.71691306s
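Note: the two df probes a few lines above feed the post-start disk report: row 2 of df output is the data row, column 5 of `df -h` is Use% (31% here) and column 4 of `df -BG` is Avail (203G). The same extraction in Go, shelling out much as ssh_runner does (sketch only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // dfField runs df and returns 1-based column col of the first data row.
    func dfField(args []string, col int) (string, error) {
        out, err := exec.Command("df", args...).Output()
        if err != nil {
            return "", err
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        if len(lines) < 2 {
            return "", fmt.Errorf("unexpected df output: %q", out)
        }
        fields := strings.Fields(lines[1])
        if col > len(fields) {
            return "", fmt.Errorf("row has only %d fields", len(fields))
        }
        return fields[col-1], nil
    }

    func main() {
        use, _ := dfField([]string{"-h", "/var"}, 5)    // Use%  -> "31%" in this run
        avail, _ := dfField([]string{"-BG", "/var"}, 4) // Avail -> "203G"
        fmt.Println(use, avail)
    }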
	I0916 10:56:35.570718  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:56:35.590148  167544 out.go:177] * Found network options:
	I0916 10:56:35.591772  167544 out.go:177]   - NO_PROXY=192.168.67.2
	W0916 10:56:35.593468  167544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:56:35.593517  167544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:56:35.593596  167544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:56:35.593646  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:35.593677  167544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:56:35.593745  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:56:35.611704  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:56:35.612298  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:56:35.835673  167544 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:56:35.835708  167544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:56:35.840042  167544 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf.mk_disabled
	I0916 10:56:35.840065  167544 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:56:35.840071  167544 command_runner.go:130] > Device: c7h/199d	Inode: 535096      Links: 1
	I0916 10:56:35.840079  167544 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:35.840091  167544 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:35.840100  167544 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:35.840108  167544 command_runner.go:130] > Change: 2024-09-16 10:54:33.479990793 +0000
	I0916 10:56:35.840116  167544 command_runner.go:130] >  Birth: 2024-09-16 10:54:33.479990793 +0000
	I0916 10:56:35.840170  167544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:56:35.848461  167544 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:56:35.848526  167544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:56:35.856900  167544 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
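Note: cni.go disables pre-existing loopback and bridge CNI definitions by renaming them with a .mk_disabled suffix rather than deleting them, so minikube's own CNI config wins while the originals stay recoverable (here the loopback file was already disabled by an earlier run, and no bridge configs were found). A local Go equivalent of the loopback pass (sketch):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
        if err != nil {
            panic(err)
        }
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue // already disabled, as in this log
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }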
	I0916 10:56:35.856921  167544 start.go:495] detecting cgroup driver to use...
	I0916 10:56:35.856950  167544 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:56:35.856995  167544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:56:35.868334  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:56:35.878910  167544 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:56:35.878963  167544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:56:35.890706  167544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:56:35.901365  167544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:56:35.974905  167544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:56:36.050879  167544 docker.go:233] disabling docker service ...
	I0916 10:56:36.050945  167544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:56:36.063844  167544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:56:36.074363  167544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:56:36.147202  167544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:56:36.226938  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
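Note: switching the node to CRI-O means taking Docker's CRI paths fully out of play. For each of cri-docker and docker the sequence is: stop the socket, stop the service, disable the socket, mask the service, so socket activation cannot resurrect it; a final is-active check confirms docker stayed down. The same ladder as a Go sketch (failures are tolerated, since a unit may already be absent or inactive):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        steps := [][]string{
            {"systemctl", "stop", "-f", "cri-docker.socket"},
            {"systemctl", "stop", "-f", "cri-docker.service"},
            {"systemctl", "disable", "cri-docker.socket"},
            {"systemctl", "mask", "cri-docker.service"},
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, s := range steps {
            // Errors are logged and skipped, mirroring minikube's best-effort teardown.
            if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v (%s)\n", s, err, out)
            }
        }
    }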
	I0916 10:56:36.238090  167544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:56:36.252289  167544 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:56:36.253247  167544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:56:36.253299  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:36.262485  167544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:56:36.262548  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:36.271804  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:36.280599  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:36.289514  167544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:56:36.298427  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:36.308208  167544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:56:36.316991  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
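Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf asserting the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports. Reconstructed from the commands (the file itself is not shown in the log, and these keys sit under CRI-O's TOML tables in the drop-in):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The sysctl probes that follow then confirm bridge-nf-call-iptables is set and enable IPv4 forwarding, both of which pod networking relies on.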
	I0916 10:56:36.326129  167544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:56:36.333509  167544 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:56:36.334255  167544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:56:36.341774  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:36.418616  167544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:56:36.536576  167544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:56:36.536642  167544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:56:36.540048  167544 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:56:36.540076  167544 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:56:36.540094  167544 command_runner.go:130] > Device: d0h/208d	Inode: 190         Links: 1
	I0916 10:56:36.540106  167544 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:36.540115  167544 command_runner.go:130] > Access: 2024-09-16 10:56:36.525041936 +0000
	I0916 10:56:36.540124  167544 command_runner.go:130] > Modify: 2024-09-16 10:56:36.525041936 +0000
	I0916 10:56:36.540136  167544 command_runner.go:130] > Change: 2024-09-16 10:56:36.525041936 +0000
	I0916 10:56:36.540141  167544 command_runner.go:130] >  Birth: -
	I0916 10:56:36.540175  167544 start.go:563] Will wait 60s for crictl version
	I0916 10:56:36.540216  167544 ssh_runner.go:195] Run: which crictl
	I0916 10:56:36.543389  167544 command_runner.go:130] > /usr/bin/crictl
	I0916 10:56:36.543494  167544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:56:36.572667  167544 command_runner.go:130] > Version:  0.1.0
	I0916 10:56:36.572689  167544 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:56:36.572694  167544 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:56:36.572704  167544 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:56:36.574679  167544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:56:36.574767  167544 ssh_runner.go:195] Run: crio --version
	I0916 10:56:36.608903  167544 command_runner.go:130] > crio version 1.24.6
	I0916 10:56:36.608926  167544 command_runner.go:130] > Version:          1.24.6
	I0916 10:56:36.608932  167544 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:56:36.608937  167544 command_runner.go:130] > GitTreeState:     clean
	I0916 10:56:36.608942  167544 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:56:36.608946  167544 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:56:36.608951  167544 command_runner.go:130] > Compiler:         gc
	I0916 10:56:36.608955  167544 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:56:36.608959  167544 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:56:36.608967  167544 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:56:36.608974  167544 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:56:36.608978  167544 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:56:36.609041  167544 ssh_runner.go:195] Run: crio --version
	I0916 10:56:36.641985  167544 command_runner.go:130] > crio version 1.24.6
	I0916 10:56:36.642019  167544 command_runner.go:130] > Version:          1.24.6
	I0916 10:56:36.642031  167544 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:56:36.642038  167544 command_runner.go:130] > GitTreeState:     clean
	I0916 10:56:36.642048  167544 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:56:36.642056  167544 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:56:36.642064  167544 command_runner.go:130] > Compiler:         gc
	I0916 10:56:36.642080  167544 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:56:36.642089  167544 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:56:36.642111  167544 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:56:36.642124  167544 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:56:36.642132  167544 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:56:36.644296  167544 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:56:36.645711  167544 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:56:36.647193  167544 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:36.664506  167544 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:56:36.668356  167544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
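Note: the host.minikube.internal update above uses a grep -v / echo / cp pipeline instead of sed -i, likely because /etc/hosts is bind-mounted inside the container: it can be truncated and rewritten in place, but not replaced by rename the way sed -i works. The same upsert in Go (hypothetical helper; os.WriteFile truncates the existing inode, matching cp):

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost drops any line mapping name, then appends the fresh mapping.
    func upsertHost(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, l := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(l, "\t"+name) {
                kept = append(kept, l)
            }
        }
        kept = append(kept, ip+"\t"+name)
        // O_TRUNC rewrite in place: works on the bind-mounted /etc/hosts.
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("192.168.67.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }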
	I0916 10:56:36.678707  167544 mustload.go:65] Loading cluster: multinode-026168
	I0916 10:56:36.678937  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:56:36.679161  167544 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:56:36.696377  167544 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:56:36.696625  167544 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.3
	I0916 10:56:36.696636  167544 certs.go:194] generating shared ca certs ...
	I0916 10:56:36.696648  167544 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:36.696751  167544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:56:36.696788  167544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:56:36.696800  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:56:36.696814  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:56:36.696826  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:56:36.696838  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:56:36.696888  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:56:36.696924  167544 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:56:36.696933  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:56:36.696957  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:56:36.696979  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:56:36.697000  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:56:36.697038  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:56:36.697069  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:56:36.697088  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:56:36.697106  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:36.697132  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:56:36.721943  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:56:36.745063  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:56:36.767924  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:56:36.789899  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:56:36.812506  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:56:36.834470  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:56:36.856527  167544 ssh_runner.go:195] Run: openssl version
	I0916 10:56:36.861562  167544 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:56:36.861641  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:56:36.870708  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:56:36.874344  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:56:36.874477  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:56:36.874529  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:56:36.880660  167544 command_runner.go:130] > 51391683
	I0916 10:56:36.880905  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:56:36.889302  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:56:36.898382  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:56:36.901681  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:56:36.901714  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:56:36.901759  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:56:36.908283  167544 command_runner.go:130] > 3ec20f2e
	I0916 10:56:36.908355  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:56:36.916669  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:56:36.925361  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:36.928779  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:36.928802  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:36.928842  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:36.935097  167544 command_runner.go:130] > b5213941
	I0916 10:56:36.935352  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
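Note: the <hash>.0 link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL's subject-hash lookup scheme, the same layout c_rehash produces: `openssl x509 -hash` prints the directory-lookup name for a certificate, and the symlink makes the CA discoverable in /etc/ssl/certs. The step as a Go sketch, shelling out for the hash (paths taken from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        src := "/usr/share/ca-certificates/11208.pem" // hash input, as in the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // "51391683" in the run above
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // ln -fs semantics: drop any stale link first
        if err := os.Symlink("/etc/ssl/certs/11208.pem", link); err != nil {
            panic(err)
        }
    }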
	I0916 10:56:36.943805  167544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:56:36.947220  167544 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:36.947260  167544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:36.947292  167544 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.31.1 crio false true} ...
	I0916 10:56:36.947382  167544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:56:36.947433  167544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:56:36.955620  167544 command_runner.go:130] > kubeadm
	I0916 10:56:36.955646  167544 command_runner.go:130] > kubectl
	I0916 10:56:36.955652  167544 command_runner.go:130] > kubelet
	I0916 10:56:36.956413  167544 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:56:36.956471  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:56:36.965613  167544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I0916 10:56:36.982813  167544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
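Note: the kubelet unit rendered at kubeadm.go:946 lands in two pieces, per the two scp lines above. The drop-in's empty `ExecStart=` line first clears the packaged command, which is how systemd drop-ins replace rather than append an ExecStart; `--node-ip=192.168.67.3` then pins this worker to its own address.

    /lib/systemd/system/kubelet.service                     base unit (352 bytes)
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   drop-in with the ExecStart shown above (370 bytes)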
	I0916 10:56:36.999879  167544 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:56:37.003128  167544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:56:37.013573  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:37.087497  167544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:37.098331  167544 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 10:56:37.098552  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:56:37.100653  167544 out.go:177] * Verifying Kubernetes components...
	I0916 10:56:37.102233  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:37.175143  167544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:37.187417  167544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:56:37.187685  167544 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:37.187992  167544 node_ready.go:35] waiting up to 6m0s for node "multinode-026168-m02" to be "Ready" ...
	I0916 10:56:37.188062  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:56:37.188069  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:37.188079  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.188084  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.190500  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:37.190518  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:37.190524  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.190528  167544 round_trippers.go:580]     Audit-Id: 1b0ff1a1-16e8-427b-85d6-70824ed30581
	I0916 10:56:37.190531  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.190534  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.190537  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:37.190540  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:37.190757  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"548","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6020 chars]
	I0916 10:56:37.191083  167544 node_ready.go:49] node "multinode-026168-m02" has status "Ready":"True"
	I0916 10:56:37.191101  167544 node_ready.go:38] duration metric: took 3.091224ms for node "multinode-026168-m02" to be "Ready" ...
	I0916 10:56:37.191109  167544 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:56:37.191173  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:37.191181  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:37.191187  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.191195  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.194164  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:37.194180  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:37.194187  167544 round_trippers.go:580]     Audit-Id: b7064293-45bc-4bfd-a3c8-c312950384a7
	I0916 10:56:37.194192  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.194196  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.194201  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:37.194205  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:37.194210  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.195004  167544 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"730"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90937 chars]
	I0916 10:56:37.198721  167544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:37.198826  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:37.198837  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:37.198849  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.198856  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.200950  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:37.200972  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:37.200982  167544 round_trippers.go:580]     Audit-Id: dd4c2810-83ca-4512-a71d-0ffb7787df96
	I0916 10:56:37.200987  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.200991  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.200995  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:37.201000  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:37.201006  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.201117  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:37.201598  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:37.201612  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:37.201620  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.201627  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.203415  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:37.203431  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:37.203437  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:37.203443  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:37.203446  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.203449  167544 round_trippers.go:580]     Audit-Id: 58228ec4-0f5f-4351-8425-d49603632e27
	I0916 10:56:37.203452  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.203455  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.203619  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
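Note: from here the log settles into minikube's pod_ready loop: roughly every 500ms it re-fetches the coredns pod and then the pod's node until the pod reports Ready, producing the identical GET pairs below. A compact client-go equivalent of that wait (kubeconfig path from the log; minikube's own helper differs in detail):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/19651-3799/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget logged above
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods("kube-system").Get(ctx,
                "coredns-7c65d6cfc9-s82cx", metav1.GetOptions{})
            if err == nil && podReady(p) {
                fmt.Println("pod Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // the cadence visible in the timestamps
        }
        panic("timed out waiting for pod Ready")
    }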
	I0916 10:56:37.699408  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:37.699430  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:37.699437  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.699442  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.701783  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:37.701806  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:37.701815  167544 round_trippers.go:580]     Audit-Id: fdca2c82-8d65-458f-908e-7d0b1b619ae8
	I0916 10:56:37.701820  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.701825  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.701829  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:37.701833  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:37.701841  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.702049  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:37.702541  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:37.702556  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:37.702563  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.702566  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.704432  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:37.704494  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:37.704508  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:37.704514  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:37.704520  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.704525  167544 round_trippers.go:580]     Audit-Id: 45fba1d6-2965-4fd1-b9a0-465ec62bba38
	I0916 10:56:37.704535  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.704541  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.704708  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:38.199248  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:38.199271  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:38.199280  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.199283  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.202095  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:38.202116  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:38.202126  167544 round_trippers.go:580]     Audit-Id: d065b87f-1ed4-4484-bc6f-c304053884d6
	I0916 10:56:38.202131  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.202135  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.202139  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:38.202143  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:38.202149  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.202366  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:38.202944  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:38.202962  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:38.202972  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.202978  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.204929  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:38.204947  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:38.204958  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.204964  167544 round_trippers.go:580]     Audit-Id: 7e5419c0-7171-461b-8d02-26727c6c5b28
	I0916 10:56:38.204970  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.204974  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.204980  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:38.204984  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:38.205166  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:38.699709  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:38.699739  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:38.699749  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.699756  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.701685  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:38.701708  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:38.701718  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.701723  167544 round_trippers.go:580]     Audit-Id: 73bf707c-e69d-4366-8fc9-b9081a1f5b80
	I0916 10:56:38.701728  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.701732  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.701736  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:38.701743  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:38.701935  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:38.702414  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:38.702429  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:38.702437  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.702443  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.704039  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:38.704059  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:38.704068  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:38.704072  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.704076  167544 round_trippers.go:580]     Audit-Id: df07077f-05c1-4f8e-9eba-b53d011e1d50
	I0916 10:56:38.704081  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.704086  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.704091  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:38.704230  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:39.198953  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:39.198980  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:39.198988  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.198993  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.201373  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:39.201395  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:39.201403  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:39.201408  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.201411  167544 round_trippers.go:580]     Audit-Id: 947d6438-1776-4457-828e-d4336a2d46a7
	I0916 10:56:39.201415  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.201420  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.201424  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:39.201646  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:39.202218  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:39.202237  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:39.202248  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.202253  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.204089  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:39.204109  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:39.204118  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.204123  167544 round_trippers.go:580]     Audit-Id: c51eb111-2c3f-47cb-b4b4-11c8a088bb98
	I0916 10:56:39.204127  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.204131  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.204134  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:39.204138  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:39.204247  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:39.204583  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
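	(Editor's note: the block above is one iteration of minikube's readiness poll — a GET on the coredns pod, a paired GET on its node, then the pod_ready check — repeated roughly every 500ms until the pod reports Ready. A minimal sketch of that loop follows, assuming a client-go clientset; waitPodReady and podReady are illustrative names for this report, not minikube's actual helpers.)

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls the pod (and its node, mirroring the paired GETs in
	// the log above) about every 500ms until the pod is Ready or ctx expires.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
			if err != nil {
				return err
			}
			// The node GET matches the second request in each logged iteration.
			if _, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{}); err != nil {
				return err
			}
			if podReady(p) {
				return nil
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod, ns)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}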
	I0916 10:56:39.699168  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:39.699198  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:39.699210  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.699216  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.701505  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:39.701531  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:39.701540  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:39.701546  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:39.701550  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.701554  167544 round_trippers.go:580]     Audit-Id: 26661d01-5b66-4cb8-adf1-63923a72b5e3
	I0916 10:56:39.701557  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.701562  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.701763  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:39.702380  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:39.702397  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:39.702408  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.702415  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.704185  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:39.704204  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:39.704215  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:39.704222  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.704228  167544 round_trippers.go:580]     Audit-Id: 2075f92f-4fe9-4147-a4cb-93222e97bbfd
	I0916 10:56:39.704234  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.704238  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.704243  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:39.704398  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:40.199001  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:40.199025  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:40.199034  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.199038  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.201463  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:40.201485  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:40.201493  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:40.201498  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.201502  167544 round_trippers.go:580]     Audit-Id: 5cbcde59-9241-4fd8-8c07-c49a3c5282c7
	I0916 10:56:40.201505  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.201510  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.201515  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:40.201667  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:40.202190  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:40.202205  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:40.202212  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.202216  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.204250  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:40.204267  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:40.204275  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:40.204281  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.204284  167544 round_trippers.go:580]     Audit-Id: 77120143-8b78-4892-86b8-ec5204b794be
	I0916 10:56:40.204287  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.204293  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.204297  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:40.204467  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:40.699050  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:40.699075  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:40.699085  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.699093  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.701412  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:40.701433  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:40.701440  167544 round_trippers.go:580]     Audit-Id: 53db4f35-aaae-45bd-a189-ab03114f7961
	I0916 10:56:40.701446  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.701451  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.701454  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:40.701458  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:40.701462  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.701667  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:40.702120  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:40.702133  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:40.702139  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.702143  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.703837  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:40.703854  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:40.703862  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:40.703867  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.703871  167544 round_trippers.go:580]     Audit-Id: 336f166e-7218-4f2f-b713-b3e43f549934
	I0916 10:56:40.703875  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.703881  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.703886  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:40.704034  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:41.199766  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:41.199798  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:41.199811  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.199815  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.202348  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:41.202366  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:41.202373  167544 round_trippers.go:580]     Audit-Id: 0b2057a6-ad2a-475e-aa52-897a1fd13622
	I0916 10:56:41.202379  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.202382  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.202385  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:41.202388  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:41.202390  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.202689  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:41.203144  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:41.203158  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:41.203165  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.203168  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.205000  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:41.205018  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:41.205027  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:41.205032  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:41.205038  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.205042  167544 round_trippers.go:580]     Audit-Id: 803e45bc-5d88-4155-a176-5afdb1ee2a57
	I0916 10:56:41.205047  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.205051  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.205182  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:41.205530  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:41.699908  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:41.699933  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:41.699942  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.699947  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.702244  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:41.702262  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:41.702269  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.702273  167544 round_trippers.go:580]     Audit-Id: 7a189321-6a35-4be7-8a9c-5edb08e6fd36
	I0916 10:56:41.702276  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.702279  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.702282  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:41.702284  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:41.702460  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:41.703062  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:41.703079  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:41.703089  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.703096  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.705168  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:41.705182  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:41.705187  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:41.705191  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:41.705194  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.705197  167544 round_trippers.go:580]     Audit-Id: 2163923e-d339-4340-8440-0fd11e2882f8
	I0916 10:56:41.705200  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.705202  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.705412  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:42.199026  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:42.199052  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:42.199066  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.199071  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.201674  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:42.201696  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:42.201703  167544 round_trippers.go:580]     Audit-Id: d56cf6c4-cd11-40c9-bdad-4df2e422741f
	I0916 10:56:42.201707  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.201714  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.201719  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:42.201722  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:42.201727  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.201904  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:42.202516  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:42.202536  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:42.202547  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.202555  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.204574  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:42.204596  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:42.204603  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:42.204606  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:42.204610  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.204613  167544 round_trippers.go:580]     Audit-Id: 97964bc7-1160-499b-b12e-8607fbf4d265
	I0916 10:56:42.204616  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.204621  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.204767  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:42.699641  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:42.699663  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:42.699671  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.699676  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.701974  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:42.701996  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:42.702003  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.702008  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.702011  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:42.702014  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:42.702017  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.702020  167544 round_trippers.go:580]     Audit-Id: 0566259d-236f-400c-a1cb-d48171cb2c7b
	I0916 10:56:42.702250  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:42.702872  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:42.702890  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:42.702902  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.702911  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.704826  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:42.704847  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:42.704855  167544 round_trippers.go:580]     Audit-Id: 9e408e60-e565-4f21-88f7-0e1c07f11bc7
	I0916 10:56:42.704862  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.704869  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.704873  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:42.704876  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:42.704880  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.705035  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:43.199498  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:43.199526  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:43.199537  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.199543  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.201993  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:43.202020  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:43.202029  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.202035  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.202042  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:43.202047  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:43.202053  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.202061  167544 round_trippers.go:580]     Audit-Id: a5830a73-33bd-451e-9252-b50021f1aef1
	I0916 10:56:43.202190  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:43.202694  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:43.202710  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:43.202718  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.202721  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.204680  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:43.204701  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:43.204712  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.204719  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:43.204725  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:43.204730  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.204734  167544 round_trippers.go:580]     Audit-Id: 8107aa14-0f6d-49b4-a7a3-a5542a2d960b
	I0916 10:56:43.204739  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.204884  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:43.699571  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:43.699595  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:43.699605  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.699624  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.701649  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:43.701668  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:43.701675  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.701678  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:43.701681  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:43.701686  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.701690  167544 round_trippers.go:580]     Audit-Id: 8791810b-b404-407b-9c12-b319210f698a
	I0916 10:56:43.701705  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.701880  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:43.702396  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:43.702410  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:43.702416  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.702421  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.704024  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:43.704039  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:43.704047  167544 round_trippers.go:580]     Audit-Id: 00566ee9-c3a1-4c52-8c0e-252083ac7f0f
	I0916 10:56:43.704052  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.704055  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.704059  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:43.704063  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:43.704069  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.704224  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:43.704609  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
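	(Editor's note: same poll, still Ready=False at 10:56:43. To spot-check the same condition by hand, a kubectl jsonpath query over the pod's Ready condition should work; the context name below assumes minikube's convention of naming the kubectl context after the profile.)

	kubectl --context multinode-026168 -n kube-system get pod coredns-7c65d6cfc9-s82cx \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'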
	I0916 10:56:44.199917  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:44.199937  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:44.199944  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.199949  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.202170  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.202194  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:44.202204  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.202211  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:44.202215  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:44.202218  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.202222  167544 round_trippers.go:580]     Audit-Id: 8b0242a1-525d-4a60-b54c-c56cd9043a5b
	I0916 10:56:44.202226  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.202383  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:44.202891  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:44.202906  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:44.202913  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.202919  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.204790  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.204807  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:44.204822  167544 round_trippers.go:580]     Audit-Id: 739cf586-1094-4c6f-bf95-4c79f3a3286c
	I0916 10:56:44.204825  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.204829  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.204842  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:44.204846  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:44.204848  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.205050  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:44.699772  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:44.699796  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:44.699804  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.699811  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.702352  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.702379  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:44.702389  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.702395  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:44.702399  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:44.702404  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.702409  167544 round_trippers.go:580]     Audit-Id: ed7a232a-b84a-43e0-ad04-f59a0599ab7b
	I0916 10:56:44.702414  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.702528  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:44.703014  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:44.703028  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:44.703035  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.703040  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.705077  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.705096  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:44.705105  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.705110  167544 round_trippers.go:580]     Audit-Id: faf1024d-15cc-4250-aa8c-89053df13dad
	I0916 10:56:44.705116  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.705119  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.705127  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:44.705131  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:44.705358  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:45.198993  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:45.199020  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:45.199028  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.199033  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.201559  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:45.201582  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:45.201588  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.201594  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.201598  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:45.201602  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:45.201606  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.201611  167544 round_trippers.go:580]     Audit-Id: 35ff714b-51ef-4ee0-ac91-e6763de7a8cd
	I0916 10:56:45.201825  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:45.202327  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:45.202343  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:45.202350  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.202354  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.204075  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:45.204094  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:45.204104  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:45.204110  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.204115  167544 round_trippers.go:580]     Audit-Id: 13ed124c-013a-4ca8-8ad9-149a626c7f47
	I0916 10:56:45.204121  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.204127  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.204131  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:45.204249  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:45.698964  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:45.698988  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:45.698997  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.699000  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.701359  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:45.701377  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:45.701384  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.701390  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:45.701394  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:45.701398  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.701402  167544 round_trippers.go:580]     Audit-Id: 3b0a6abf-348b-4599-8be3-5aa1107e9cb6
	I0916 10:56:45.701406  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.701647  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:45.702198  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:45.702216  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:45.702223  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.702226  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.703846  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:45.703859  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:45.703864  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.703869  167544 round_trippers.go:580]     Audit-Id: 0adf93e5-abbd-40ee-af12-6a51c7f112d0
	I0916 10:56:45.703872  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.703875  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.703879  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:45.703883  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:45.704409  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:45.704910  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
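	The cycle above repeats roughly every 500 ms: the readiness wait GETs the coredns pod, then its node, and pod_ready.go:103 reports the pod's Ready condition until it flips to True or the wait times out. Below is a minimal sketch of such a poll using client-go; it is not minikube's actual implementation, and the kubeconfig path, namespace, pod name, interval, and timeout are assumptions read off this log.

	// Sketch only: a client-go readiness poll resembling the loop traced above.
	// The kubeconfig path is a placeholder; minikube resolves its own.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, matching the cadence visible in the timestamps above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").
					Get(ctx, "coredns-7c65d6cfc9-s82cx", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						// The log's `has status "Ready":"False"` lines correspond
						// to this condition still being false on each iteration.
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err)
	}

	Against the cluster state recorded here, a loop like this would keep iterating and eventually return a timeout error, which is consistent with the repeated "Ready":"False" lines.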
	I0916 10:56:46.199046  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:46.199071  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:46.199079  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:46.199083  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:46.201359  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:46.201380  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:46.201389  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:46.201394  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:46 GMT
	I0916 10:56:46.201399  167544 round_trippers.go:580]     Audit-Id: 5947f81b-3a41-466c-b579-cf7ef4609de6
	I0916 10:56:46.201405  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:46.201409  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:46.201412  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:46.201665  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:46.202121  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:46.202138  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:46.202148  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:46.202155  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:46.203849  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:46.203865  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:46.203870  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:46.203874  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:46.203879  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:46 GMT
	I0916 10:56:46.203882  167544 round_trippers.go:580]     Audit-Id: 3ed02903-2477-4299-a49e-e22cd2c0b42b
	I0916 10:56:46.203886  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:46.203891  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:46.204014  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:46.699734  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:46.699758  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:46.699769  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:46.699773  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:46.702185  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:46.702218  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:46.702226  167544 round_trippers.go:580]     Audit-Id: bea37202-85ea-4893-972b-daa6088c5802
	I0916 10:56:46.702231  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:46.702236  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:46.702241  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:46.702244  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:46.702249  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:46 GMT
	I0916 10:56:46.702472  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:46.702994  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:46.703008  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:46.703015  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:46.703019  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:46.704978  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:46.704994  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:46.705000  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:46.705003  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:46.705006  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:46.705008  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:46.705011  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:46 GMT
	I0916 10:56:46.705013  167544 round_trippers.go:580]     Audit-Id: 9c5ded72-df84-41a9-87b0-5bcc76439bfa
	I0916 10:56:46.705208  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:47.199917  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:47.199945  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:47.199954  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:47.199959  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:47.202232  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:47.202251  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:47.202258  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:47.202263  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:47.202266  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:47.202269  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:47.202273  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:47 GMT
	I0916 10:56:47.202275  167544 round_trippers.go:580]     Audit-Id: c20fd67f-c586-44f5-b74d-a04ccfe64ee6
	I0916 10:56:47.202377  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:47.202813  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:47.202826  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:47.202833  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:47.202841  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:47.204622  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:47.204641  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:47.204656  167544 round_trippers.go:580]     Audit-Id: d1bb34b8-53a8-4e19-9888-7a6d27a4dfb2
	I0916 10:56:47.204662  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:47.204667  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:47.204672  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:47.204676  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:47.204680  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:47 GMT
	I0916 10:56:47.204861  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:47.699631  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:47.699657  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:47.699667  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:47.699675  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:47.701893  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:47.701915  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:47.701922  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:47.701925  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:47 GMT
	I0916 10:56:47.701928  167544 round_trippers.go:580]     Audit-Id: 5b791037-b06d-4b15-b57a-15daac0ac96e
	I0916 10:56:47.701930  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:47.701933  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:47.701937  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:47.702091  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:47.702525  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:47.702536  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:47.702543  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:47.702548  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:47.704316  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:47.704329  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:47.704334  167544 round_trippers.go:580]     Audit-Id: e22bd4f4-cf82-4e86-9be0-d2cbd05a9e6b
	I0916 10:56:47.704340  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:47.704345  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:47.704348  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:47.704353  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:47.704359  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:47 GMT
	I0916 10:56:47.704483  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:47.704973  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:48.199133  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:48.199155  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:48.199163  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:48.199168  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:48.201562  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:48.201585  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:48.201594  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:48.201598  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:48.201603  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:48 GMT
	I0916 10:56:48.201608  167544 round_trippers.go:580]     Audit-Id: d00b96ad-bb36-442a-8b0c-53c848f3bba3
	I0916 10:56:48.201618  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:48.201622  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:48.201797  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:48.202333  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:48.202348  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:48.202355  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:48.202359  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:48.204396  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:48.204419  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:48.204427  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:48 GMT
	I0916 10:56:48.204435  167544 round_trippers.go:580]     Audit-Id: 7b3f880a-88c8-4a1a-bef0-a123effec3ab
	I0916 10:56:48.204441  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:48.204446  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:48.204450  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:48.204456  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:48.204624  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:48.699218  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:48.699248  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:48.699259  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:48.699264  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:48.701137  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:48.701154  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:48.701160  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:48.701164  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:48.701168  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:48.701170  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:48.701174  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:48 GMT
	I0916 10:56:48.701179  167544 round_trippers.go:580]     Audit-Id: be580afe-8ddf-4b9f-83a3-fba74e5087bf
	I0916 10:56:48.701294  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:48.701771  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:48.701788  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:48.701795  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:48.701798  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:48.703504  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:48.703523  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:48.703530  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:48.703534  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:48.703536  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:48 GMT
	I0916 10:56:48.703539  167544 round_trippers.go:580]     Audit-Id: 2d70c285-acb8-47d9-a83e-e637ddb1ed2b
	I0916 10:56:48.703543  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:48.703548  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:48.703704  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:49.199359  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:49.199384  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:49.199392  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:49.199396  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:49.201930  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:49.201949  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:49.201956  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:49 GMT
	I0916 10:56:49.201960  167544 round_trippers.go:580]     Audit-Id: 896e8d27-ca23-4cd6-a918-9881e0c1a289
	I0916 10:56:49.201963  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:49.201967  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:49.201970  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:49.201977  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:49.202186  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:49.202640  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:49.202656  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:49.202663  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:49.202667  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:49.204416  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:49.204432  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:49.204438  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:49.204443  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:49 GMT
	I0916 10:56:49.204445  167544 round_trippers.go:580]     Audit-Id: 92c0bcf4-12d8-4ca2-8623-39a931591cc8
	I0916 10:56:49.204449  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:49.204453  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:49.204457  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:49.204587  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:49.699479  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:49.699503  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:49.699511  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:49.699516  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:49.701975  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:49.701996  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:49.702006  167544 round_trippers.go:580]     Audit-Id: 41f7f26c-cbbd-45c6-ace6-5821701bf87f
	I0916 10:56:49.702012  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:49.702018  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:49.702021  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:49.702026  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:49.702031  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:49 GMT
	I0916 10:56:49.702205  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:49.702710  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:49.702728  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:49.702735  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:49.702738  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:49.704369  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:49.704386  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:49.704395  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:49.704400  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:49 GMT
	I0916 10:56:49.704405  167544 round_trippers.go:580]     Audit-Id: ca4f6caa-4ef7-4696-a6c3-c445c1c5e7d2
	I0916 10:56:49.704410  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:49.704414  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:49.704418  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:49.704553  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:50.199201  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:50.199228  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:50.199236  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:50.199240  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:50.202029  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:50.202053  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:50.202061  167544 round_trippers.go:580]     Audit-Id: 83cce9e7-7f5d-4788-a96c-5f83e3214f44
	I0916 10:56:50.202067  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:50.202071  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:50.202076  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:50.202082  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:50.202088  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:50 GMT
	I0916 10:56:50.202273  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:50.202757  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:50.202773  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:50.202783  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:50.202789  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:50.204662  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:50.204682  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:50.204692  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:50 GMT
	I0916 10:56:50.204697  167544 round_trippers.go:580]     Audit-Id: 2ea808d7-b892-4d6c-98d5-0df03975cbbf
	I0916 10:56:50.204701  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:50.204706  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:50.204710  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:50.204714  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:50.204908  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:50.205270  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
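	The round_trippers.go request/response lines themselves come from client-go's debugging transport, which surfaces when a program is run with high klog verbosity (response bodies are truncated by the logger, hence the [truncated ... chars] markers). A hedged sketch of wiring similar tracing into an arbitrary client-go program, assuming the transport package's debugging wrapper; again the kubeconfig path is a placeholder, not taken from this run:

	// Sketch only: reproduce round_trippers-style request tracing by wrapping
	// the transport. The DebugLevel choices print method/URL with timing plus
	// both header sets, similar to the GET / Request Headers / Response Headers
	// lines above.
	package main

	import (
		"context"
		"net/http"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/transport"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		// Wrap every request in the debugging round tripper so each API call
		// logs its method, URL, latency, and headers via klog.
		cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
			return transport.NewDebuggingRoundTripper(rt,
				transport.DebugURLTiming,
				transport.DebugRequestHeaders,
				transport.DebugResponseHeaders,
			)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Any call now emits round_trippers-style trace lines.
		_, _ = client.CoreV1().Pods("kube-system").
			Get(context.Background(), "coredns-7c65d6cfc9-s82cx", metav1.GetOptions{})
	}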
	I0916 10:56:50.699150  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:50.699172  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:50.699182  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:50.699187  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:50.701397  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:50.701420  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:50.701431  167544 round_trippers.go:580]     Audit-Id: 852e8e5c-220d-4c10-873e-26266cab61a8
	I0916 10:56:50.701435  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:50.701439  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:50.701442  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:50.701447  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:50.701452  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:50 GMT
	I0916 10:56:50.701653  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:50.702127  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:50.702141  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:50.702148  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:50.702151  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:50.704062  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:50.704080  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:50.704088  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:50.704094  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:50 GMT
	I0916 10:56:50.704098  167544 round_trippers.go:580]     Audit-Id: ac0c4311-33af-4c21-8c26-4cd53fbfd649
	I0916 10:56:50.704102  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:50.704106  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:50.704111  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:50.704257  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:51.198917  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:51.198942  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:51.198950  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:51.198953  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:51.201110  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:51.201131  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:51.201140  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:51.201145  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:51.201148  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:51.201153  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:51.201158  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:51 GMT
	I0916 10:56:51.201161  167544 round_trippers.go:580]     Audit-Id: 6b554797-194b-42f6-bdfc-7291f45e1d6b
	I0916 10:56:51.201394  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:51.201877  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:51.201892  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:51.201902  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:51.201915  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:51.203619  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:51.203634  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:51.203644  167544 round_trippers.go:580]     Audit-Id: 73472fa3-a933-44de-b1b7-4b3b03bfb070
	I0916 10:56:51.203650  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:51.203654  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:51.203659  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:51.203664  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:51.203668  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:51 GMT
	I0916 10:56:51.203791  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	[... 3 near-identical polling rounds omitted: the same GET .../pods/coredns-7c65d6cfc9-s82cx and GET .../nodes/multinode-026168 request/response pairs repeat at 10:56:51.699, 10:56:52.199 and 10:56:52.699, each 200 OK with unchanged response bodies ...]
	I0916 10:56:52.705448  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
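What this loop is doing: minikube's pod_ready helper re-fetches the coredns Pod (and, for context, its Node) from the apiserver roughly every 500ms and re-checks the Pod's Ready condition until it turns True or a timeout expires. A minimal client-go sketch of that pattern follows; it is illustrative only, not minikube's actual code, and the kubeconfig path, 4-minute timeout, and hard-coded names are assumptions lifted from this log.

// Sketch: poll a Pod's Ready condition with client-go, mirroring the
// GET loop visible in the log above. Illustrative only; names, path and
// timeout are assumptions taken from this log, not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the Pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// 500ms matches the cadence of the timestamps above; 4m is an assumed timeout.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-s82cx", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			return isPodReady(pod), nil
		})
	fmt.Println("pod ready:", err == nil)
}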
	[... 5 near-identical polling rounds omitted: the same GET pod / GET node request/response pairs repeat at 10:56:53.199, 10:56:53.699, 10:56:54.199, 10:56:54.699 and 10:56:55.199, each 200 OK with unchanged response bodies ...]
	I0916 10:56:55.204637  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
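The same wait can be expressed as a one-liner; minikube names the kubeconfig context after the profile, so here that would presumably be:

kubectl --context multinode-026168 -n kube-system wait pod/coredns-7c65d6cfc9-s82cx --for=condition=Ready --timeout=4m

which exits 0 as soon as the Ready condition flips to True, and non-zero on timeout.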
	I0916 10:56:55.699905  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:55.699926  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:55.699934  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:55.699937  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:55.702118  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:55.702135  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:55.702142  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:55.702146  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:55.702149  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:55 GMT
	I0916 10:56:55.702152  167544 round_trippers.go:580]     Audit-Id: 75b4f87d-f35d-4c03-8e45-c5bf676d9cba
	I0916 10:56:55.702156  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:55.702159  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:55.702331  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:55.702865  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:55.702879  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:55.702889  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:55.702896  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:55.704668  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:55.704683  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:55.704692  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:55.704696  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:55.704700  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:55.704704  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:55 GMT
	I0916 10:56:55.704708  167544 round_trippers.go:580]     Audit-Id: 2d2a5011-d1cd-461c-bc1c-3306145641c1
	I0916 10:56:55.704716  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:55.704808  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:56.199505  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:56.199533  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:56.199546  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:56.199552  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:56.202057  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:56.202089  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:56.202098  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:56.202104  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:56 GMT
	I0916 10:56:56.202108  167544 round_trippers.go:580]     Audit-Id: ab9084a6-b515-494d-99fd-b2632f57ab29
	I0916 10:56:56.202111  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:56.202115  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:56.202119  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:56.202316  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:56.203074  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:56.203093  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:56.203104  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:56.203110  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:56.204978  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:56.204999  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:56.205007  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:56.205011  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:56.205015  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:56.205019  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:56.205022  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:56 GMT
	I0916 10:56:56.205024  167544 round_trippers.go:580]     Audit-Id: fe440858-b33a-464d-92c0-90e402298adf
	I0916 10:56:56.205193  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:56.699941  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:56.699971  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:56.699981  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:56.699986  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:56.702344  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:56.702366  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:56.702375  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:56.702379  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:56.702384  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:56 GMT
	I0916 10:56:56.702387  167544 round_trippers.go:580]     Audit-Id: 8f621466-01ea-460d-9e03-97a4926d433c
	I0916 10:56:56.702391  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:56.702395  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:56.702539  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:56.703074  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:56.703097  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:56.703107  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:56.703112  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:56.704874  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:56.704890  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:56.704899  167544 round_trippers.go:580]     Audit-Id: 87619619-8f2d-41fc-a195-189a57553347
	I0916 10:56:56.704905  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:56.704912  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:56.704916  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:56.704923  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:56.704927  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:56 GMT
	I0916 10:56:56.705062  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:57.199766  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:57.199792  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:57.199801  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:57.199806  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:57.202138  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:57.202167  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:57.202176  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:57.202181  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:57 GMT
	I0916 10:56:57.202185  167544 round_trippers.go:580]     Audit-Id: a03e3fb6-3757-43ae-aa4f-5f722db9d7ca
	I0916 10:56:57.202190  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:57.202195  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:57.202199  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:57.202414  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:57.202907  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:57.202921  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:57.202928  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:57.202932  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:57.204789  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:57.204811  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:57.204819  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:57 GMT
	I0916 10:56:57.204825  167544 round_trippers.go:580]     Audit-Id: 6694b15d-fa4a-4a99-b6bd-e8272fc80b0c
	I0916 10:56:57.204829  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:57.204833  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:57.204837  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:57.204840  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:57.204974  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:57.205269  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:57.699942  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:57.699975  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:57.699984  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:57.699988  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:57.702435  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:57.702460  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:57.702468  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:57 GMT
	I0916 10:56:57.702472  167544 round_trippers.go:580]     Audit-Id: 6947c7c1-f73b-4c08-bc9d-b19bee76bcc0
	I0916 10:56:57.702476  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:57.702485  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:57.702491  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:57.702497  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:57.702632  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:57.703258  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:57.703281  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:57.703291  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:57.703297  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:57.705171  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:57.705189  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:57.705198  167544 round_trippers.go:580]     Audit-Id: 951fbbd5-766e-4884-a237-d98febc2321b
	I0916 10:56:57.705203  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:57.705208  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:57.705212  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:57.705222  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:57.705225  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:57 GMT
	I0916 10:56:57.705365  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:58.199033  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:58.199059  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:58.199067  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:58.199071  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:58.201539  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:58.201569  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:58.201580  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:58 GMT
	I0916 10:56:58.201585  167544 round_trippers.go:580]     Audit-Id: bd22d2ea-ebc6-439d-88ea-ef65ab312802
	I0916 10:56:58.201589  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:58.201592  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:58.201594  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:58.201597  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:58.201797  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:58.202297  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:58.202312  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:58.202319  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:58.202322  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:58.204431  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:58.204455  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:58.204465  167544 round_trippers.go:580]     Audit-Id: b5a64987-7be1-4ef1-b085-ec64e2f8e237
	I0916 10:56:58.204473  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:58.204479  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:58.204484  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:58.204488  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:58.204492  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:58 GMT
	I0916 10:56:58.204590  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:58.699118  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:58.699140  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:58.699149  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:58.699154  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:58.701271  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:58.701291  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:58.701300  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:58 GMT
	I0916 10:56:58.701305  167544 round_trippers.go:580]     Audit-Id: 15d1a355-bd6a-41e6-91b1-6ed6522c12b9
	I0916 10:56:58.701310  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:58.701313  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:58.701319  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:58.701322  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:58.701464  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:58.701950  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:58.701964  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:58.701971  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:58.701975  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:58.703822  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:58.703840  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:58.703848  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:58.703854  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:58.703858  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:58.703863  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:58.703868  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:58 GMT
	I0916 10:56:58.703873  167544 round_trippers.go:580]     Audit-Id: cf05c4df-a1f8-4af0-9fac-14c439c55e50
	I0916 10:56:58.703974  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:59.199627  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:59.199652  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:59.199667  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.199671  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.201957  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.201977  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:59.201982  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.201986  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:59.201990  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:59.201993  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.201995  167544 round_trippers.go:580]     Audit-Id: a10ec318-6649-423a-9c4b-49a6872aee78
	I0916 10:56:59.201998  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.202200  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:59.202686  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:59.202700  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:59.202707  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.202711  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.204451  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.204471  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:59.204477  167544 round_trippers.go:580]     Audit-Id: f9b8bca8-8d44-44d8-99f7-a76845cb469c
	I0916 10:56:59.204488  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.204491  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.204494  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:59.204496  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:59.204499  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.204627  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:59.699663  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:56:59.699685  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:59.699693  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.699697  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.702229  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.702260  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:59.702270  167544 round_trippers.go:580]     Audit-Id: 4d66824f-6f0b-4c49-bf8c-d0cf11986d46
	I0916 10:56:59.702277  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.702283  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.702287  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:59.702291  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:59.702296  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.702385  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:56:59.702868  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:56:59.702878  167544 round_trippers.go:469] Request Headers:
	I0916 10:56:59.702885  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.702890  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.704650  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.704668  167544 round_trippers.go:577] Response Headers:
	I0916 10:56:59.704677  167544 round_trippers.go:580]     Audit-Id: c5fd81b8-5d65-4494-b223-859210b0fb2b
	I0916 10:56:59.704682  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.704686  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.704692  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:56:59.704697  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:56:59.704703  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.704827  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:56:59.705150  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:57:00.199616  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:00.199638  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:00.199646  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.199649  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.201852  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:00.201871  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:00.201877  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.201884  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:00.201890  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:00.201895  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.201899  167544 round_trippers.go:580]     Audit-Id: 44b10252-58a8-4c19-b60c-4eb2ffc9c525
	I0916 10:57:00.201903  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.202045  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:57:00.202527  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:00.202543  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:00.202550  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.202553  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.204528  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:00.204551  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:00.204558  167544 round_trippers.go:580]     Audit-Id: 3b2c4cf5-4321-4864-9a91-dd6cbbdbc25c
	I0916 10:57:00.204562  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.204566  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.204569  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:00.204571  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:00.204574  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.204743  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:00.699399  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:00.699430  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:00.699441  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.699448  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.702053  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:00.702075  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:00.702085  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.702096  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.702106  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:00.702120  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:00.702123  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.702127  167544 round_trippers.go:580]     Audit-Id: 32e2c634-788a-41af-8a5c-997c6d5a38af
	I0916 10:57:00.702239  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:57:00.702700  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:00.702713  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:00.702719  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.702723  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.704536  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:00.704566  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:00.704576  167544 round_trippers.go:580]     Audit-Id: afa5655a-3f92-4573-ae38-9c5d666dd46b
	I0916 10:57:00.704582  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.704587  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.704592  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:00.704597  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:00.704601  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.704715  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:01.199494  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:01.199519  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:01.199527  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.199530  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.201822  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.201841  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:01.201847  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:01.201853  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.201858  167544 round_trippers.go:580]     Audit-Id: 6596b691-1254-49db-bca6-81ef015418a8
	I0916 10:57:01.201863  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.201867  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.201872  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:01.201980  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:57:01.202418  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:01.202431  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:01.202438  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.202442  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.204244  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:01.204259  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:01.204265  167544 round_trippers.go:580]     Audit-Id: 90ec4dba-b02a-4bf1-a0bf-a545a85736f1
	I0916 10:57:01.204271  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.204274  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.204277  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:01.204280  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:01.204284  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.204391  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:01.699029  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:01.699054  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:01.699062  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.699065  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.701375  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.701399  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:01.701406  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.701410  167544 round_trippers.go:580]     Audit-Id: 0cd0bef5-97f6-44dc-994b-433add299d93
	I0916 10:57:01.701413  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.701417  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.701420  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:01.701423  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:01.701596  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:57:01.702061  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:01.702073  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:01.702081  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.702085  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.703955  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:01.703969  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:01.703975  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.703980  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:01.703982  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:01.703985  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.703988  167544 round_trippers.go:580]     Audit-Id: 0c679950-0af9-4266-8421-51bfafe9befe
	I0916 10:57:01.703991  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.704134  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:02.199852  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:02.199878  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:02.199886  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:02.199890  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:02.202203  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:02.202228  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:02.202236  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:02 GMT
	I0916 10:57:02.202242  167544 round_trippers.go:580]     Audit-Id: bd296a32-0910-40e5-b653-97e2dbece877
	I0916 10:57:02.202246  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:02.202250  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:02.202257  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:02.202260  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:02.202473  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:57:02.202973  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:02.202991  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:02.202999  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:02.203003  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:02.204648  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:02.204676  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:02.204682  167544 round_trippers.go:580]     Audit-Id: 628d1fe6-13eb-4020-b17c-86c2a33172e8
	I0916 10:57:02.204687  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:02.204691  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:02.204695  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:02.204701  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:02.204705  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:02 GMT
	I0916 10:57:02.204842  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:02.205235  167544 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
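
	(Editor's note, illustration only: the cycle above repeats roughly every 500ms; minikube's pod_ready helper GETs the coredns pod, checks its Ready condition in status.conditions, then GETs the node it is scheduled on. The following client-go sketch shows an equivalent readiness poll; it is not minikube's implementation, and the kubeconfig path, namespace, pod name, and 6-minute timeout are assumptions chosen to match this log.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); hypothetical setup.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, matching the cadence of the GETs in the log above,
		// until the pod reports Ready=True or the timeout expires.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-s82cx", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						fmt.Printf("pod %q Ready=%s\n", pod.Name, cond.Status)
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
	}
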
	I0916 10:57:02.699994  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:02.700022  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:02.700034  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:02.700039  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:02.702389  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:02.702417  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:02.702425  167544 round_trippers.go:580]     Audit-Id: 6c47f4bc-8ecb-4d9d-8403-b788907e5b04
	I0916 10:57:02.702430  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:02.702433  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:02.702436  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:02.702439  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:02.702443  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:02 GMT
	I0916 10:57:02.702676  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:57:02.703179  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:02.703194  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:02.703203  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:02.703209  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:02.704861  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:02.704876  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:02.704884  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:02.704889  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:02.704892  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:02.704896  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:02 GMT
	I0916 10:57:02.704899  167544 round_trippers.go:580]     Audit-Id: 9909f5f4-c7f5-4fa3-9bc4-aceac1d1acb2
	I0916 10:57:02.704901  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:02.705082  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:03.199745  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:03.199774  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.199782  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.199786  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.202319  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:03.202347  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.202358  167544 round_trippers.go:580]     Audit-Id: d57cc0e6-d1b3-417b-9d4c-e517ccab75f6
	I0916 10:57:03.202363  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.202367  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.202371  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.202374  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.202379  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.202534  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"712","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:57:03.203035  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:03.203051  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.203060  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.203064  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.204982  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.205006  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.205014  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.205017  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.205021  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.205023  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.205026  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.205029  167544 round_trippers.go:580]     Audit-Id: 13acd256-8f7d-496a-8b71-26915373e785
	I0916 10:57:03.205173  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:03.699753  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:03.699777  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.699787  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.699791  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.701807  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:03.701830  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.701842  167544 round_trippers.go:580]     Audit-Id: f7306188-d649-4305-ab1c-88c163e52f22
	I0916 10:57:03.701847  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.701851  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.701856  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.701863  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.701867  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.701995  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"798","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0916 10:57:03.702485  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:03.702502  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.702512  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.702517  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.704086  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.704106  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.704116  167544 round_trippers.go:580]     Audit-Id: 34c4c080-a80d-4246-9675-2bbf5a10b956
	I0916 10:57:03.704123  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.704127  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.704131  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.704136  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.704140  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.704241  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:03.704544  167544 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:03.704560  167544 pod_ready.go:82] duration metric: took 26.505812501s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.704573  167544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.704632  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:57:03.704642  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.704668  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.704677  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.706290  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.706312  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.706321  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.706328  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.706333  167544 round_trippers.go:580]     Audit-Id: 944b8b51-359e-415a-899e-f5dcdcc80a02
	I0916 10:57:03.706339  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.706344  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.706348  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.706457  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"724","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6575 chars]
	I0916 10:57:03.706861  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:03.706874  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.706884  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.706888  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.708489  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.708505  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.708511  167544 round_trippers.go:580]     Audit-Id: 08162dfa-04b8-4d31-a7bb-5c16574a5432
	I0916 10:57:03.708515  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.708521  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.708525  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.708531  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.708535  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.708646  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:03.708917  167544 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:03.708930  167544 pod_ready.go:82] duration metric: took 4.351101ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.708945  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.709003  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:57:03.709010  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.709016  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.709021  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.710721  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.710736  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.710745  167544 round_trippers.go:580]     Audit-Id: 1867bac6-17f8-44f8-bb45-5a4fb43211a7
	I0916 10:57:03.710752  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.710757  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.710761  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.710768  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.710775  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.710955  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"732","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 9107 chars]
	I0916 10:57:03.711356  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:03.711371  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.711380  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.711385  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.712850  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.712864  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.712870  167544 round_trippers.go:580]     Audit-Id: ad82164d-dea5-4331-807a-b37e76928756
	I0916 10:57:03.712876  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.712880  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.712882  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.712886  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.712889  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.712983  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:03.713256  167544 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:03.713270  167544 pod_ready.go:82] duration metric: took 4.317129ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.713278  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.713321  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:57:03.713328  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.713359  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.713371  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.714943  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.714970  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.714977  167544 round_trippers.go:580]     Audit-Id: b96cddf6-72a6-4539-80d9-ba44235c3e0d
	I0916 10:57:03.714983  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.714987  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.714990  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.714994  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.714997  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.715150  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"725","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8897 chars]
	I0916 10:57:03.715660  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:03.715679  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.715686  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.715692  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.717402  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.717419  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.717428  167544 round_trippers.go:580]     Audit-Id: 6f9cc1f2-2afb-4a63-9eea-c85c50bc9974
	I0916 10:57:03.717436  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.717440  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.717444  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.717449  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.717454  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.717561  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:03.717892  167544 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:03.717909  167544 pod_ready.go:82] duration metric: took 4.625683ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.717921  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.717976  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:57:03.717984  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.717992  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.717998  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.719540  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.719556  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.719566  167544 round_trippers.go:580]     Audit-Id: f09ddcfc-7998-450e-9a35-767c055cda22
	I0916 10:57:03.719570  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.719574  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.719577  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.719581  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.719584  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.719729  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"711","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:57:03.720111  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:03.720125  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.720131  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.720134  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.721570  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:03.721589  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.721596  167544 round_trippers.go:580]     Audit-Id: 83bc7b8c-eeed-4b8e-9668-5e8fe96cc30e
	I0916 10:57:03.721600  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.721603  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.721606  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.721608  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.721612  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.721736  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:03.722106  167544 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:03.722128  167544 pod_ready.go:82] duration metric: took 4.196154ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.722140  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:03.900596  167544 request.go:632] Waited for 178.391357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:03.900665  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:03.900672  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:03.900678  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:03.900682  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:03.902961  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:03.902979  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:03.902985  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:03.902989  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:03.902994  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:03.902997  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:03.902999  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:03 GMT
	I0916 10:57:03.903002  167544 round_trippers.go:580]     Audit-Id: 5d3ffeb8-8883-4a92-85f2-c7b8c9a84500
	I0916 10:57:03.903162  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"587","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:57:04.099912  167544 request.go:632] Waited for 196.300336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:04.099973  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:04.099978  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:04.099985  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:04.099988  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:04.102176  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:04.102204  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:04.102212  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:04.102217  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:04.102220  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:04.102223  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:04.102226  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:04 GMT
	I0916 10:57:04.102229  167544 round_trippers.go:580]     Audit-Id: 20532544-dc8a-43c5-8529-ffb163a40ca3
	I0916 10:57:04.102322  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"605","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5735 chars]
	I0916 10:57:04.102624  167544 pod_ready.go:93] pod "kube-proxy-g86bs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:04.102640  167544 pod_ready.go:82] duration metric: took 380.491204ms for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
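The 380 ms above is mostly not API latency: the request.go:632 lines show client-go's client-side rate limiter (a QPS/Burst token bucket applied before the server's priority-and-fairness ever sees the request) forcing each call to wait its turn. A minimal sketch of where those limits live on a rest.Config; the raised values here are illustrative, only the field names and the 5/10 defaults are client-go's:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load whatever kubeconfig is at hand; the path helper is client-go's.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // rest.Config defaults to QPS=5, Burst=10; bursts of GETs beyond that
        // budget sleep first, which request.go:632 reports as "client-side
        // throttling, not priority and fairness".
        cfg.QPS = 50
        cfg.Burst = 100

        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }
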
	I0916 10:57:04.102649  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:04.300617  167544 request.go:632] Waited for 197.906059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:57:04.300699  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:57:04.300705  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:04.300712  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:04.300718  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:04.302995  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:04.303017  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:04.303027  167544 round_trippers.go:580]     Audit-Id: 521985c9-c851-4b11-ba6d-97d5bd5ead95
	I0916 10:57:04.303031  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:04.303037  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:04.303042  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:04.303046  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:04.303050  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:04 GMT
	I0916 10:57:04.303209  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qds2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac30bd54-b932-4f52-a53c-4edbc5eefc7c","resourceVersion":"784","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:57:04.499945  167544 request.go:632] Waited for 196.302164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:57:04.500024  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:57:04.500030  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:04.500037  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:04.500044  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:04.502332  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:04.502360  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:04.502369  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:04.502374  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:04.502378  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:04.502382  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:04 GMT
	I0916 10:57:04.502387  167544 round_trippers.go:580]     Audit-Id: d9859913-32be-47dd-a5d5-f4d902a7a085
	I0916 10:57:04.502391  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:04.502497  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"738","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6052 chars]
	I0916 10:57:04.502795  167544 pod_ready.go:93] pod "kube-proxy-qds2d" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:04.502810  167544 pod_ready.go:82] duration metric: took 400.154764ms for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:04.502819  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:04.699817  167544 request.go:632] Waited for 196.938372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:57:04.699909  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:57:04.699920  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:04.699928  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:04.699932  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:04.702399  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:04.702420  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:04.702429  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:04 GMT
	I0916 10:57:04.702436  167544 round_trippers.go:580]     Audit-Id: 7c24f4cc-0478-4ea6-8ad1-14bda4d5afb1
	I0916 10:57:04.702442  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:04.702449  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:04.702455  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:04.702460  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:04.702600  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"723","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5101 chars]
	I0916 10:57:04.900418  167544 request.go:632] Waited for 197.386321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:04.900472  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:04.900477  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:04.900484  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:04.900488  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:04.902781  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:04.902800  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:04.902809  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:04.902815  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:04.902818  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:04 GMT
	I0916 10:57:04.902822  167544 round_trippers.go:580]     Audit-Id: 8b6804e6-4ae9-4670-a797-ee7e3ab7bf8f
	I0916 10:57:04.902827  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:04.902831  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:04.903009  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:04.903331  167544 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:04.903349  167544 pod_ready.go:82] duration metric: took 400.524182ms for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:04.903360  167544 pod_ready.go:39] duration metric: took 27.712237795s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
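The pod_ready.go lines above are a poll loop: for each system-critical pod, GET the pod, read its Ready condition, GET the owning node, and repeat until everything is Ready or the 6m0s budget expires. A minimal client-go sketch of the same condition check and poll (not minikube's actual code; the namespace and pod name are taken from this log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True, the same
    // check behind the `has status "Ready":"True"` lines in the log.
    func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        c := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 500ms for up to 6 minutes, mirroring "waiting up to 6m0s".
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
            6*time.Minute, true, func(ctx context.Context) (bool, error) {
                return podReady(ctx, c, "kube-system", "etcd-multinode-026168")
            })
        fmt.Println("ready:", err == nil)
    }

Passing immediate=true makes the condition run once before the first sleep, which is why pods that are already Ready above are confirmed in a few milliseconds rather than a full poll interval.
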
	I0916 10:57:04.903378  167544 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:57:04.903430  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:57:04.914540  167544 system_svc.go:56] duration metric: took 11.151058ms WaitForService to wait for kubelet
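The kubelet check above leans on a systemd convention: `systemctl is-active --quiet <unit>` prints nothing and reports state purely through its exit status (0 only when the unit is active), so the SSH runner only needs the command's success or failure. The same test, run locally rather than over SSH, as a short sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeletActive mirrors the check in the log: `systemctl is-active --quiet`
    // signals the unit's state only via its exit status.
    func kubeletActive() bool {
        // exit status 0 => active; any non-zero status => inactive or failed.
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", kubeletActive())
    }
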
	I0916 10:57:04.914570  167544 kubeadm.go:582] duration metric: took 27.816199656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:57:04.914589  167544 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:57:05.100079  167544 request.go:632] Waited for 185.408595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:57:05.100149  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:57:05.100155  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:05.100162  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:05.100167  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:05.102717  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:05.102742  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:05.102750  167544 round_trippers.go:580]     Audit-Id: b2f9b189-a5cc-44e2-9edc-a6890ef4a09e
	I0916 10:57:05.102754  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:05.102757  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:05.102761  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:05.102764  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:05.102767  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:05 GMT
	I0916 10:57:05.103019  167544 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"806"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 20088 chars]
	I0916 10:57:05.103658  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:05.103673  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:05.103682  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:05.103686  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:05.103689  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:05.103692  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:05.103696  167544 node_conditions.go:105] duration metric: took 189.102813ms to run NodePressure ...
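The node_conditions.go step lists the nodes once and reads each node's cpu and ephemeral-storage figures out of the node status, which is where the three identical "304681132Ki / 8" pairs above come from (one per node in the NodeList). A sketch of that read; whether minikube consults .Status.Capacity or .Status.Allocatable is not visible in the log, so this sketch reads Capacity:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        c := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := c.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The same fields the node_conditions.go lines print above.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }
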
	I0916 10:57:05.103709  167544 start.go:241] waiting for startup goroutines ...
	I0916 10:57:05.103733  167544 start.go:255] writing updated cluster config ...
	I0916 10:57:05.106013  167544 out.go:201] 
	I0916 10:57:05.107583  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:57:05.107676  167544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:57:05.109372  167544 out.go:177] * Starting "multinode-026168-m03" worker node in "multinode-026168" cluster
	I0916 10:57:05.110673  167544 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:57:05.112205  167544 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:57:05.113766  167544 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:57:05.113793  167544 cache.go:56] Caching tarball of preloaded images
	I0916 10:57:05.113798  167544 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:57:05.113889  167544 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:57:05.113902  167544 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:57:05.113995  167544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	W0916 10:57:05.134216  167544 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:57:05.134238  167544 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:57:05.134323  167544 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:57:05.134341  167544 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:57:05.134346  167544 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:57:05.134353  167544 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:57:05.134361  167544 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:57:05.135443  167544 image.go:273] response: 
	I0916 10:57:05.191069  167544 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:57:05.191116  167544 cache.go:194] Successfully downloaded all kic artifacts
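The W-line above is the interesting branch of this cache flow: the kicbase image found in the local Docker daemon has the wrong architecture, so it is rejected and the locally cached tarball is loaded instead. A hypothetical probe for that mismatch using the docker CLI ({{.Architecture}} is a real `docker image inspect` template field; this is an illustration, not minikube's implementation, and the digest is dropped from the reference for brevity):

    package main

    import (
        "fmt"
        "os/exec"
        "runtime"
        "strings"
    )

    func main() {
        img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644"
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Architecture}}", img).Output()
        if err != nil {
            fmt.Println("image not in local daemon:", err)
            return
        }
        arch := strings.TrimSpace(string(out))
        // A mismatch here is what produces "image ... is of wrong architecture"
        // and triggers the fall-back to the cached tarball.
        fmt.Printf("image arch %s, host arch %s, usable: %v\n",
            arch, runtime.GOARCH, arch == runtime.GOARCH)
    }
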
	I0916 10:57:05.191147  167544 start.go:360] acquireMachinesLock for multinode-026168-m03: {Name:mk0e4ade2c46b1f96804d4447922c9be7fcabf27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:57:05.191209  167544 start.go:364] duration metric: took 45.307µs to acquireMachinesLock for "multinode-026168-m03"
	I0916 10:57:05.191226  167544 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:57:05.191232  167544 fix.go:54] fixHost starting: m03
	I0916 10:57:05.191442  167544 cli_runner.go:164] Run: docker container inspect multinode-026168-m03 --format={{.State.Status}}
	I0916 10:57:05.208748  167544 fix.go:112] recreateIfNeeded on multinode-026168-m03: state=Stopped err=<nil>
	W0916 10:57:05.208776  167544 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:57:05.211288  167544 out.go:177] * Restarting existing docker container for "multinode-026168-m03" ...
	I0916 10:57:05.212922  167544 cli_runner.go:164] Run: docker start multinode-026168-m03
	I0916 10:57:05.488905  167544 cli_runner.go:164] Run: docker container inspect multinode-026168-m03 --format={{.State.Status}}
	I0916 10:57:05.508169  167544 kic.go:430] container "multinode-026168-m03" state is running.
	I0916 10:57:05.508686  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m03
	I0916 10:57:05.527607  167544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:57:05.527920  167544 machine.go:93] provisionDockerMachine start ...
	I0916 10:57:05.527995  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:05.546446  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:57:05.546627  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32933 <nil> <nil>}
	I0916 10:57:05.546639  167544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:57:05.547337  167544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38092->127.0.0.1:32933: read: connection reset by peer
	I0916 10:57:08.680788  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m03
	
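The one-off "connection reset by peer" above is expected: the container was started moments earlier and its sshd is still coming up, so the provisioner retries the dial until the handshake succeeds (about three seconds later here). A sketch of such a dial-with-retry loop using golang.org/x/crypto/ssh; the address and user come from this log, and a real client would also supply the machine's id_rsa via ssh.PublicKeys:

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps redialing while the freshly started container's sshd
    // is still booting; early attempts fail exactly like the reset above.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, budget time.Duration) (*ssh.Client, error) {
        deadline := time.Now().Add(budget)
        for {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("ssh to %s never came up: %w", addr, err)
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User: "docker",
            // Auth is omitted in this sketch; pass ssh.PublicKeys(signer)
            // with the machine's id_rsa in real use.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            Timeout:         5 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:32933", cfg, time.Minute)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("ssh is up")
    }
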
	I0916 10:57:08.680817  167544 ubuntu.go:169] provisioning hostname "multinode-026168-m03"
	I0916 10:57:08.680892  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:08.698451  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:57:08.698636  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32933 <nil> <nil>}
	I0916 10:57:08.698649  167544 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168-m03 && echo "multinode-026168-m03" | sudo tee /etc/hostname
	I0916 10:57:08.844584  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m03
	
	I0916 10:57:08.844680  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:08.862702  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:57:08.862993  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32933 <nil> <nil>}
	I0916 10:57:08.863027  167544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:57:08.997804  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
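The shell just run makes the /etc/hosts update idempotent: if any line already ends in the hostname, do nothing; otherwise rewrite the existing 127.0.1.1 entry, or append one if there is none. A Go rendering of the same logic, assuming direct file access instead of sudo-over-SSH:

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell above: skip if the hostname is already
    // present, else replace the 127.0.1.1 line, else append a new entry.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil // already present, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + hostname
        var out string
        if loopback.Match(data) {
            out = loopback.ReplaceAllString(string(data), entry)
        } else {
            out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
        }
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "multinode-026168-m03"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
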
	I0916 10:57:08.997848  167544 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:57:08.997870  167544 ubuntu.go:177] setting up certificates
	I0916 10:57:08.997882  167544 provision.go:84] configureAuth start
	I0916 10:57:08.997962  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m03
	I0916 10:57:09.015379  167544 provision.go:143] copyHostCerts
	I0916 10:57:09.015413  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:57:09.015443  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:57:09.015453  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:57:09.015522  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:57:09.015601  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:57:09.015636  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:57:09.015644  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:57:09.015669  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:57:09.015725  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:57:09.015745  167544 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:57:09.015749  167544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:57:09.015773  167544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:57:09.015824  167544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168-m03 san=[127.0.0.1 192.168.67.4 localhost minikube multinode-026168-m03]
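
	The server cert above is issued with both IP and DNS SANs so one certificate validates for 127.0.0.1, the node IP, and the node's names. A standalone crypto/x509 sketch of that shape; the CA here is generated on the fly so the program runs by itself, and serials/validity periods are illustrative assumptions, not minikube's:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// The real flow loads ca.pem/ca-key.pem from disk; this sketch generates
		// a throwaway CA so it runs standalone. Errors are elided for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SAN set from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-026168-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.4")},
			DNSNames:     []string{"localhost", "minikube", "multinode-026168-m03"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
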
	I0916 10:57:09.071054  167544 provision.go:177] copyRemoteCerts
	I0916 10:57:09.071111  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:57:09.071148  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:09.088308  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m03/id_rsa Username:docker}
	I0916 10:57:09.182047  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:57:09.182114  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:57:09.205112  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:57:09.205180  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:57:09.227624  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:57:09.227710  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:57:09.250150  167544 provision.go:87] duration metric: took 252.251103ms to configureAuth
	I0916 10:57:09.250178  167544 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:57:09.250385  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:57:09.250486  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:09.267050  167544 main.go:141] libmachine: Using SSH client type: native
	I0916 10:57:09.267227  167544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32933 <nil> <nil>}
	I0916 10:57:09.267243  167544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:57:09.520648  167544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:57:09.520672  167544 machine.go:96] duration metric: took 3.992734028s to provisionDockerMachine
	I0916 10:57:09.520681  167544 start.go:293] postStartSetup for "multinode-026168-m03" (driver="docker")
	I0916 10:57:09.520691  167544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:57:09.520786  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:57:09.520838  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:09.537764  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m03/id_rsa Username:docker}
	I0916 10:57:09.633983  167544 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:57:09.636918  167544 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:57:09.636940  167544 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:57:09.636948  167544 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:57:09.636954  167544 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:57:09.636958  167544 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:57:09.636966  167544 command_runner.go:130] > ID=ubuntu
	I0916 10:57:09.636973  167544 command_runner.go:130] > ID_LIKE=debian
	I0916 10:57:09.636978  167544 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:57:09.636983  167544 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:57:09.636989  167544 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:57:09.636996  167544 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:57:09.637002  167544 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:57:09.637066  167544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:57:09.637093  167544 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:57:09.637109  167544 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:57:09.637121  167544 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:57:09.637135  167544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:57:09.637190  167544 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:57:09.637280  167544 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:57:09.637295  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:57:09.637437  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:57:09.645832  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:57:09.668337  167544 start.go:296] duration metric: took 147.640459ms for postStartSetup
	I0916 10:57:09.668414  167544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:57:09.668465  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:09.685610  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m03/id_rsa Username:docker}
	I0916 10:57:09.778088  167544 command_runner.go:130] > 31%
	I0916 10:57:09.778359  167544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:57:09.782424  167544 command_runner.go:130] > 203G
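
	The two df probes read row two of df's output and pick a single column: used percentage first, then available gigabytes. A local sketch of the same probe (minikube runs it through its ssh_runner on the remote machine, which is not reproduced here):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Row 2 of df is the filesystem holding /var; $4 with -BG is the free
		// space in whole gigabytes, matching the "203G" read back above.
		out, err := exec.Command("sh", "-c", "df -BG /var | awk 'NR==2{print $4}'").Output()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		fmt.Println("available on /var:", strings.TrimSpace(string(out)))
	}
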
	I0916 10:57:09.782682  167544 fix.go:56] duration metric: took 4.591443822s for fixHost
	I0916 10:57:09.782709  167544 start.go:83] releasing machines lock for "multinode-026168-m03", held for 4.591488827s
	I0916 10:57:09.782780  167544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m03
	I0916 10:57:09.802950  167544 out.go:177] * Found network options:
	I0916 10:57:09.804465  167544 out.go:177]   - NO_PROXY=192.168.67.2,192.168.67.3
	W0916 10:57:09.805978  167544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:57:09.806008  167544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:57:09.806035  167544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:57:09.806053  167544 proxy.go:119] fail to check proxy env: Error ip not in block
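
	A hedged guess at what the repeated proxy warnings mean: each NO_PROXY entry appears to be tested as a CIDR block, and a bare IP such as 192.168.67.2 fails to parse as one, surfacing as the "ip not in block" notice. A sketch under that assumption only (function name and exact semantics are ours, not minikube's proxy.go):

	package main

	import (
		"fmt"
		"net"
	)

	// ipInBlock treats the NO_PROXY entry as a CIDR block; a bare IP fails to
	// parse and surfaces as an "ip not in block" style error, as in the log.
	func ipInBlock(ip, block string) (bool, error) {
		_, cidr, err := net.ParseCIDR(block)
		if err != nil {
			return false, fmt.Errorf("ip not in block: %w", err)
		}
		return cidr.Contains(net.ParseIP(ip)), nil
	}

	func main() {
		fmt.Println(ipInBlock("192.168.67.4", "192.168.67.2"))    // error: entry is a bare IP
		fmt.Println(ipInBlock("192.168.67.4", "192.168.67.0/24")) // true
	}
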
	I0916 10:57:09.806126  167544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:57:09.806184  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:09.806241  167544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:57:09.806305  167544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m03
	I0916 10:57:09.824616  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m03/id_rsa Username:docker}
	I0916 10:57:09.824909  167544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m03/id_rsa Username:docker}
	I0916 10:57:10.064199  167544 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:57:10.064287  167544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:57:10.068478  167544 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf.mk_disabled
	I0916 10:57:10.068509  167544 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:57:10.068519  167544 command_runner.go:130] > Device: 100002h/1048578d	Inode: 535096      Links: 1
	I0916 10:57:10.068531  167544 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:57:10.068541  167544 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:57:10.068549  167544 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:57:10.068561  167544 command_runner.go:130] > Change: 2024-09-16 10:55:04.738290142 +0000
	I0916 10:57:10.068568  167544 command_runner.go:130] >  Birth: 2024-09-16 10:55:04.734289848 +0000
	I0916 10:57:10.068627  167544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:57:10.077363  167544 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:57:10.077435  167544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:57:10.086188  167544 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
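
	Before starting CRI-O, the tooling parks any competing default CNI configs by renaming them with a .mk_disabled suffix, mirroring the find ... -exec mv commands above. A sketch of the same idea (the glob-and-rename loop is ours):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Park default bridge/podman configs so only the cluster CNI is loaded;
		// files already carrying the suffix are skipped, keeping this idempotent.
		matches, _ := filepath.Glob("/etc/cni/net.d/*")
		for _, m := range matches {
			base := filepath.Base(m)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue
			}
			if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
				fmt.Println("disabling", m)
				os.Rename(m, m+".mk_disabled")
			}
		}
	}
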
	I0916 10:57:10.086213  167544 start.go:495] detecting cgroup driver to use...
	I0916 10:57:10.086246  167544 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:57:10.086283  167544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:57:10.097916  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:57:10.109817  167544 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:57:10.109877  167544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:57:10.122024  167544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:57:10.133222  167544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:57:10.208231  167544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:57:10.286488  167544 docker.go:233] disabling docker service ...
	I0916 10:57:10.286567  167544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:57:10.297836  167544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:57:10.308003  167544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:57:10.384360  167544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:57:10.464067  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:57:10.475865  167544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:57:10.489763  167544 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:57:10.490747  167544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:57:10.490797  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:10.500976  167544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:57:10.501045  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:10.510530  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:10.520005  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:10.529318  167544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:57:10.537950  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:10.547219  167544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:10.555838  167544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
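
	Each of the crio.conf edits above is a whole-line anchored substitution, the sed idiom s|^.*key = .*$|key = "value"|. A sketch of the equivalent rewrite in Go (setKey is a hypothetical helper; the config path comes from the log):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setKey replaces the whole existing assignment line for key, like sed's
	// ^.*key = .*$ anchor, and writes the file back in place.
	func setKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := setKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
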
	I0916 10:57:10.564592  167544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:57:10.571988  167544 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:57:10.572060  167544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:57:10.579556  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:57:10.659935  167544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:57:10.774605  167544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:57:10.774674  167544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:57:10.777961  167544 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:57:10.777983  167544 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:57:10.777990  167544 command_runner.go:130] > Device: 10000bh/1048587d	Inode: 175         Links: 1
	I0916 10:57:10.777996  167544 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:57:10.778003  167544 command_runner.go:130] > Access: 2024-09-16 10:57:10.763560384 +0000
	I0916 10:57:10.778008  167544 command_runner.go:130] > Modify: 2024-09-16 10:57:10.763560384 +0000
	I0916 10:57:10.778015  167544 command_runner.go:130] > Change: 2024-09-16 10:57:10.763560384 +0000
	I0916 10:57:10.778019  167544 command_runner.go:130] >  Birth: -
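
	"Will wait 60s for socket path" boils down to polling stat until the path exists and reports socket mode, as the stat output above confirms. A sketch under that reading (the polling interval and structure are ours):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a unix socket, or times out.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}
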
	I0916 10:57:10.778037  167544 start.go:563] Will wait 60s for crictl version
	I0916 10:57:10.778072  167544 ssh_runner.go:195] Run: which crictl
	I0916 10:57:10.780812  167544 command_runner.go:130] > /usr/bin/crictl
	I0916 10:57:10.780921  167544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:57:10.812156  167544 command_runner.go:130] > Version:  0.1.0
	I0916 10:57:10.812176  167544 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:57:10.812181  167544 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:57:10.812186  167544 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:57:10.814172  167544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:57:10.814241  167544 ssh_runner.go:195] Run: crio --version
	I0916 10:57:10.845546  167544 command_runner.go:130] > crio version 1.24.6
	I0916 10:57:10.845573  167544 command_runner.go:130] > Version:          1.24.6
	I0916 10:57:10.845586  167544 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:57:10.845593  167544 command_runner.go:130] > GitTreeState:     clean
	I0916 10:57:10.845604  167544 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:57:10.845626  167544 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:57:10.845637  167544 command_runner.go:130] > Compiler:         gc
	I0916 10:57:10.845646  167544 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:57:10.845659  167544 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:57:10.845671  167544 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:57:10.845682  167544 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:57:10.845689  167544 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:57:10.847032  167544 ssh_runner.go:195] Run: crio --version
	I0916 10:57:10.882318  167544 command_runner.go:130] > crio version 1.24.6
	I0916 10:57:10.882340  167544 command_runner.go:130] > Version:          1.24.6
	I0916 10:57:10.882347  167544 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:57:10.882352  167544 command_runner.go:130] > GitTreeState:     clean
	I0916 10:57:10.882358  167544 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:57:10.882362  167544 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:57:10.882366  167544 command_runner.go:130] > Compiler:         gc
	I0916 10:57:10.882370  167544 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:57:10.882377  167544 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:57:10.882388  167544 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:57:10.882399  167544 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:57:10.882409  167544 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:57:10.884458  167544 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:57:10.885994  167544 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:57:10.887696  167544 out.go:177]   - env NO_PROXY=192.168.67.2,192.168.67.3
	I0916 10:57:10.889159  167544 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:57:10.906603  167544 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:57:10.910492  167544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:57:10.921390  167544 mustload.go:65] Loading cluster: multinode-026168
	I0916 10:57:10.921605  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:57:10.921807  167544 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:57:10.940372  167544 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:57:10.940641  167544 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.4
	I0916 10:57:10.940654  167544 certs.go:194] generating shared ca certs ...
	I0916 10:57:10.940669  167544 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:57:10.940810  167544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:57:10.940872  167544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:57:10.940890  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:57:10.940907  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:57:10.940923  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:57:10.940941  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:57:10.941008  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:57:10.941046  167544 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:57:10.941060  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:57:10.941096  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:57:10.941127  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:57:10.941154  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:57:10.941211  167544 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:57:10.941246  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:10.941265  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:57:10.941283  167544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:57:10.941310  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:57:10.964780  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:57:10.987586  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:57:11.010325  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:57:11.033483  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:57:11.055484  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:57:11.078442  167544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:57:11.100652  167544 ssh_runner.go:195] Run: openssl version
	I0916 10:57:11.106366  167544 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:57:11.106441  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:57:11.115567  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:57:11.119072  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:57:11.119123  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:57:11.119174  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:57:11.125526  167544 command_runner.go:130] > 3ec20f2e
	I0916 10:57:11.125595  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:57:11.134271  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:57:11.143919  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:11.147577  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:11.147636  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:11.147692  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:11.154246  167544 command_runner.go:130] > b5213941
	I0916 10:57:11.154311  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:57:11.163322  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:57:11.172673  167544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:57:11.176262  167544 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:57:11.176293  167544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:57:11.176340  167544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:57:11.182740  167544 command_runner.go:130] > 51391683
	I0916 10:57:11.182922  167544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
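
	The openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL trust directory under its subject-hash name, <hash>.0, which is how TLS clients look certificates up. A sketch that shells out for the hash the same way (linkByHash is ours; error handling is condensed, and /etc/ssl/certs requires root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash computes the OpenSSL subject hash of a PEM and links
	// /etc/ssl/certs/<hash>.0 at it, replacing any stale link (ln -fs).
	func linkByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
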
	I0916 10:57:11.191576  167544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:57:11.194912  167544 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:57:11.194965  167544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:57:11.194998  167544 kubeadm.go:934] updating node {m03 192.168.67.4 0 v1.31.1  false true} ...
	I0916 10:57:11.195071  167544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:57:11.195124  167544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:57:11.202452  167544 command_runner.go:130] > kubeadm
	I0916 10:57:11.202475  167544 command_runner.go:130] > kubectl
	I0916 10:57:11.202482  167544 command_runner.go:130] > kubelet
	I0916 10:57:11.203167  167544 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:57:11.203224  167544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:57:11.212176  167544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I0916 10:57:11.229128  167544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:57:11.246056  167544 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:57:11.249691  167544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:57:11.260320  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:57:11.325648  167544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:57:11.336603  167544 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0916 10:57:11.336880  167544 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:57:11.338937  167544 out.go:177] * Verifying Kubernetes components...
	I0916 10:57:11.340399  167544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:57:11.417377  167544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:57:11.428682  167544 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:57:11.428897  167544 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:57:11.429117  167544 node_ready.go:35] waiting up to 6m0s for node "multinode-026168-m03" to be "Ready" ...
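
	The GET loop that follows is a readiness poll: fetch the node roughly every 500ms and check its NodeReady condition until it turns True or the 6m budget runs out. A hedged client-go sketch of such a wait (this is not minikube's node_ready.go; it requires the k8s.io/client-go module, the kubeconfig path and names are taken from the log, and the rest is ours):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3799/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms for up to 6 minutes, mirroring the cadence above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-026168-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err)
	}
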
	I0916 10:57:11.429184  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:11.429192  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:11.429199  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:11.429203  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:11.431351  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:11.431371  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:11.431377  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:11.431381  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:11.431384  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:11.431387  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:11 GMT
	I0916 10:57:11.431390  167544 round_trippers.go:580]     Audit-Id: cdf481da-8f29-49bd-9842-3248cd9477c4
	I0916 10:57:11.431393  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:11.431507  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"810","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6394 chars]
	I0916 10:57:11.930235  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:11.930272  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:11.930280  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:11.930285  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:11.932558  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:11.932583  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:11.932593  167544 round_trippers.go:580]     Audit-Id: 788fafb2-48f2-4d18-b887-ec8aa2b00740
	I0916 10:57:11.932598  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:11.932603  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:11.932607  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:11.932612  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:11.932614  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:11 GMT
	I0916 10:57:11.932834  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:12.429373  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:12.429407  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:12.429415  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:12.429421  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:12.431707  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:12.431736  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:12.431752  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:12 GMT
	I0916 10:57:12.431760  167544 round_trippers.go:580]     Audit-Id: fe74272d-d623-42fd-8205-912cece8b31b
	I0916 10:57:12.431766  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:12.431772  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:12.431777  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:12.431782  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:12.431932  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:12.929959  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:12.929989  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:12.930001  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:12.930008  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:12.932371  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:12.932395  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:12.932404  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:12.932408  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:12.932413  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:12 GMT
	I0916 10:57:12.932416  167544 round_trippers.go:580]     Audit-Id: 3d5b0a9e-72d6-470a-8bb8-20928321a0ec
	I0916 10:57:12.932422  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:12.932426  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:12.932622  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:13.430256  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:13.430287  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:13.430297  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:13.430302  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:13.432783  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:13.432808  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:13.432818  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:13.432824  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:13.432828  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:13 GMT
	I0916 10:57:13.432832  167544 round_trippers.go:580]     Audit-Id: 287f40a9-17a7-4521-a52a-2116c830c24d
	I0916 10:57:13.432838  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:13.432842  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:13.433010  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:13.433365  167544 node_ready.go:53] node "multinode-026168-m03" has status "Ready":"Unknown"
	I0916 10:57:13.929590  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:13.929616  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:13.929626  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:13.929632  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:13.931834  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:13.931852  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:13.931858  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:13.931863  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:13 GMT
	I0916 10:57:13.931865  167544 round_trippers.go:580]     Audit-Id: 035bf5bb-7bf7-4439-9762-7f5e8418acbe
	I0916 10:57:13.931868  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:13.931871  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:13.931875  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:13.932031  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:14.429513  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:14.429537  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:14.429544  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:14.429549  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:14.431684  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:14.431705  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:14.431712  167544 round_trippers.go:580]     Audit-Id: 9021e2ca-1a57-45b2-9940-dba155fcae1a
	I0916 10:57:14.431715  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:14.431719  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:14.431721  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:14.431726  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:14.431729  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:14 GMT
	I0916 10:57:14.431815  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:14.929440  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:14.929469  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:14.929480  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:14.929484  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:14.931663  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:14.931687  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:14.931697  167544 round_trippers.go:580]     Audit-Id: 1b6c3653-b494-4270-88ce-2d6500ef75bb
	I0916 10:57:14.931704  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:14.931710  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:14.931715  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:14.931719  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:14.931722  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:14 GMT
	I0916 10:57:14.931855  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:15.429456  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:15.429481  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:15.429488  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:15.429491  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:15.431849  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:15.431867  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:15.431874  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:15.431877  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:15.431881  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:15.431885  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:15 GMT
	I0916 10:57:15.431888  167544 round_trippers.go:580]     Audit-Id: 1758e123-af18-4072-8c89-4349f56a1f5b
	I0916 10:57:15.431892  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:15.432037  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:15.929547  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:15.929570  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:15.929578  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:15.929582  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:15.932151  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:15.932171  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:15.932177  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:15.932181  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:15.932187  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:15.932191  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:15 GMT
	I0916 10:57:15.932195  167544 round_trippers.go:580]     Audit-Id: f7fdcd44-9352-4e28-b1e4-0ace233c6a59
	I0916 10:57:15.932200  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:15.932359  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:15.932679  167544 node_ready.go:53] node "multinode-026168-m03" has status "Ready":"Unknown"
	I0916 10:57:16.430127  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:16.430155  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:16.430167  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:16.430174  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:16.432506  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:16.432533  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:16.432542  167544 round_trippers.go:580]     Audit-Id: 94958ec7-ee21-417b-b903-3d4f6c7b51e1
	I0916 10:57:16.432549  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:16.432554  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:16.432557  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:16.432562  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:16.432567  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:16 GMT
	I0916 10:57:16.432694  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:16.929350  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:16.929378  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:16.929388  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:16.929394  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:16.931705  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:16.931726  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:16.931733  167544 round_trippers.go:580]     Audit-Id: 56b4ea98-27cf-4028-a7e9-946635364087
	I0916 10:57:16.931737  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:16.931740  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:16.931744  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:16.931747  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:16.931750  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:16 GMT
	I0916 10:57:16.931940  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:17.429482  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:17.429527  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:17.429537  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:17.429543  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:17.432278  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:17.432298  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:17.432304  167544 round_trippers.go:580]     Audit-Id: 3a3cc3fd-ed1a-49bb-8b7c-0a04738646c4
	I0916 10:57:17.432309  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:17.432313  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:17.432317  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:17.432323  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:17.432328  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:17 GMT
	I0916 10:57:17.432572  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:17.929321  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:17.929355  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:17.929363  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:17.929370  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:17.931696  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:17.931720  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:17.931730  167544 round_trippers.go:580]     Audit-Id: f31d7bc4-032b-4f6a-822b-862bfa1927d2
	I0916 10:57:17.931738  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:17.931743  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:17.931747  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:17.931753  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:17.931757  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:17 GMT
	I0916 10:57:17.931905  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:18.429510  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:18.429533  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.429541  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.429547  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.431871  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:18.431899  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.431906  167544 round_trippers.go:580]     Audit-Id: dcc3603b-5ca8-474e-9001-92ad2b9ac574
	I0916 10:57:18.431910  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.431913  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.431917  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.431919  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.431922  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.432081  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"816","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6491 chars]
	I0916 10:57:18.432388  167544 node_ready.go:53] node "multinode-026168-m03" has status "Ready":"Unknown"
	I0916 10:57:18.929492  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:18.929520  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.929528  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.929534  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.931791  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:18.931808  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.931814  167544 round_trippers.go:580]     Audit-Id: d39bb11a-4039-4862-bdf8-676b08433001
	I0916 10:57:18.931818  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.931821  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.931824  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.931828  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.931830  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.932058  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"828","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6096 chars]
	I0916 10:57:18.932364  167544 node_ready.go:49] node "multinode-026168-m03" has status "Ready":"True"
	I0916 10:57:18.932379  167544 node_ready.go:38] duration metric: took 7.503248493s for node "multinode-026168-m03" to be "Ready" ...
	I0916 10:57:18.932388  167544 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
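[Editor's note on the loop above: the ~500ms GET cycle against /api/v1/nodes/multinode-026168-m03 is minikube's node_ready.go poll — fetch the Node, read its Ready condition, and retry until the status flips from "Unknown" to "True". As a rough sketch only (not minikube's actual code), the same wait can be expressed with client-go's wait helpers; the node name, ~500ms interval, and kubeconfig path below are taken from this run purely for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodeReady polls the Node object until its Ready condition is True,
    // mirroring the GET /api/v1/nodes/<name> loop visible in the log above.
    func waitForNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        // Status may be "True", "False", or "Unknown" (as logged above).
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        if err := waitForNodeReady(context.Background(), client, "multinode-026168-m03", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }
]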
	I0916 10:57:18.932439  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:57:18.932447  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.932453  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.932457  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.935172  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:18.935199  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.935207  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.935211  167544 round_trippers.go:580]     Audit-Id: 2a682d00-7f71-477f-9769-74bbf3bc23b1
	I0916 10:57:18.935217  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.935226  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.935231  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.935236  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.936259  167544 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"828"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"798","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 91118 chars]
	I0916 10:57:18.940994  167544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:18.941086  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:57:18.941098  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.941109  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.941113  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.942976  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:18.942992  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.942998  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.943001  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.943004  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.943007  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.943010  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.943013  167544 round_trippers.go:580]     Audit-Id: f6211dba-5bdc-4de4-87d6-505c0dc02bf3
	I0916 10:57:18.943145  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"798","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0916 10:57:18.943539  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:18.943553  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.943560  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.943565  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.945123  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:18.945138  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.945147  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.945153  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.945158  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.945162  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.945167  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.945171  167544 round_trippers.go:580]     Audit-Id: 075d6016-0cb4-401e-81b1-8838a29f417c
	I0916 10:57:18.945340  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:18.945667  167544 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:18.945682  167544 pod_ready.go:82] duration metric: took 4.660183ms for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
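[Editor's note: each pod_ready.go check above ('has status "Ready":"True"') boils down to reading the pod's Ready condition. A minimal sketch of that test, assuming client-go types — minikube's own helper may differ:

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podReady reports whether a Pod's Ready condition is True, the same
    // per-pod test the pod_ready.go lines above log for coredns, etcd, the
    // apiserver, and the rest of the system-critical pods.
    func podReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
]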
	I0916 10:57:18.945689  167544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:18.945745  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:57:18.945753  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.945761  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.945765  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.947365  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:18.947383  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.947389  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.947392  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.947396  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.947400  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.947408  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.947413  167544 round_trippers.go:580]     Audit-Id: 3ff8e003-64a0-4507-bb06-ab59a8ac991b
	I0916 10:57:18.947545  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"724","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6575 chars]
	I0916 10:57:18.947926  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:18.947938  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.947945  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.947950  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.949301  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:18.949317  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.949324  167544 round_trippers.go:580]     Audit-Id: 8275b6e4-897a-4600-abcc-f1607e0a1c9e
	I0916 10:57:18.949328  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.949351  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.949357  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.949364  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.949369  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.949525  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:18.949793  167544 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:18.949806  167544 pod_ready.go:82] duration metric: took 4.111318ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:18.949820  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:18.949866  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:57:18.949873  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.949880  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.949885  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.951331  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:18.951347  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.951354  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.951357  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.951360  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.951363  167544 round_trippers.go:580]     Audit-Id: 0faeb1ca-d0a9-4190-8db9-5c0ec5b5795d
	I0916 10:57:18.951369  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.951374  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.951603  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"732","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 9107 chars]
	I0916 10:57:18.951964  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:18.951975  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.951981  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.951984  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.953376  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:18.953393  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.953402  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.953407  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.953412  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.953416  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.953422  167544 round_trippers.go:580]     Audit-Id: f9b75f02-2b27-421c-a899-9a2c054f8f21
	I0916 10:57:18.953428  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.953548  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:18.953864  167544 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:18.953878  167544 pod_ready.go:82] duration metric: took 4.053194ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:18.953892  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:18.953939  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:57:18.953946  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.953952  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.953958  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.955428  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:18.955449  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.955455  167544 round_trippers.go:580]     Audit-Id: b89038b9-55f6-42d1-b311-43e0b512907a
	I0916 10:57:18.955462  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.955468  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.955473  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.955478  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.955488  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.955652  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"725","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8897 chars]
	I0916 10:57:18.956021  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:18.956032  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:18.956039  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:18.956043  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:18.957418  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:18.957437  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:18.957446  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:18 GMT
	I0916 10:57:18.957453  167544 round_trippers.go:580]     Audit-Id: 34e99e08-7a4b-4541-ba77-4aead43fdd13
	I0916 10:57:18.957459  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:18.957466  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:18.957470  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:18.957479  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:18.957586  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:18.957846  167544 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:18.957859  167544 pod_ready.go:82] duration metric: took 3.957917ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:18.957868  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:19.130265  167544 request.go:632] Waited for 172.325122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:57:19.130332  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:57:19.130337  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:19.130344  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:19.130351  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:19.132572  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:19.132595  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:19.132604  167544 round_trippers.go:580]     Audit-Id: 8cd2fd83-9505-4872-87bf-f714b374887b
	I0916 10:57:19.132610  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:19.132615  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:19.132618  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:19.132621  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:19.132625  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:19 GMT
	I0916 10:57:19.132760  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"711","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:57:19.330481  167544 request.go:632] Waited for 197.280768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:19.330566  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:19.330574  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:19.330583  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:19.330589  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:19.332756  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:19.332778  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:19.332785  167544 round_trippers.go:580]     Audit-Id: 766e37d1-ab46-46bc-9a7a-884176b97841
	I0916 10:57:19.332789  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:19.332794  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:19.332799  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:19.332805  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:19.332810  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:19 GMT
	I0916 10:57:19.332941  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:19.333247  167544 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:19.333261  167544 pod_ready.go:82] duration metric: took 375.388564ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
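[Editor's note: the request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines in this stretch come from client-go's own token-bucket rate limiter, not from the API server. When rest.Config leaves QPS and Burst at zero, client-go falls back to its defaults (QPS 5, burst 10), which is what produces these sub-200ms waits during rapid polling. A hedged sketch of how a caller could raise that budget — the QPS/Burst values here are illustrative, not what minikube configures:

    package client

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with a larger client-side rate-limit budget,
    // so bursts of GETs like the readiness polls above are not queued locally.
    func newClient() (*kubernetes.Clientset, error) {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            return nil, err
        }
        config.QPS = 50   // default is 5 when left at zero
        config.Burst = 100 // default is 10 when left at zero
        return kubernetes.NewForConfig(config)
    }
]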
	I0916 10:57:19.333270  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:19.529777  167544 request.go:632] Waited for 196.444904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:19.529833  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:19.529838  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:19.529845  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:19.529849  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:19.532399  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:19.532421  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:19.532428  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:19.532433  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:19.532437  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:19.532441  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:19 GMT
	I0916 10:57:19.532446  167544 round_trippers.go:580]     Audit-Id: 725006ba-7333-4734-808e-42f3411ce087
	I0916 10:57:19.532450  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:19.532571  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:19.730360  167544 request.go:632] Waited for 197.339458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:19.730425  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:19.730435  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:19.730445  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:19.730448  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:19.732684  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:19.732707  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:19.732717  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:19.732722  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:19.732728  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:19.732734  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:19 GMT
	I0916 10:57:19.732739  167544 round_trippers.go:580]     Audit-Id: f5b32454-8ce6-4496-97d8-3b38d6f2211c
	I0916 10:57:19.732743  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:19.732868  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"828","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6096 chars]
	I0916 10:57:19.930455  167544 request.go:632] Waited for 96.312829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:19.930516  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:19.930521  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:19.930529  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:19.930532  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:19.932962  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:19.932983  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:19.932994  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:19.933002  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:19.933009  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:19.933014  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:19.933019  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:19 GMT
	I0916 10:57:19.933024  167544 round_trippers.go:580]     Audit-Id: cf02f3a9-7a81-4249-ae95-6e5b24de9f8c
	I0916 10:57:19.933216  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:20.130067  167544 request.go:632] Waited for 196.358138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:20.130143  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:20.130148  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:20.130155  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:20.130158  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:20.132686  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:20.132703  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:20.132710  167544 round_trippers.go:580]     Audit-Id: d4f652c4-c3e9-4fa7-a2a5-e44513abc8a8
	I0916 10:57:20.132714  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:20.132718  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:20.132722  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:20.132725  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:20.132729  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:20 GMT
	I0916 10:57:20.132880  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"828","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6096 chars]
	I0916 10:57:20.334263  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:20.334285  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:20.334297  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:20.334301  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:20.336460  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:20.336479  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:20.336486  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:20.336489  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:20.336498  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:20.336502  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:20 GMT
	I0916 10:57:20.336506  167544 round_trippers.go:580]     Audit-Id: 2db9474b-5f65-40cc-a480-87f2cb567b07
	I0916 10:57:20.336510  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:20.336686  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:20.530512  167544 request.go:632] Waited for 193.390589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:20.530589  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:20.530595  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:20.530602  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:20.530608  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:20.532830  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:20.532854  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:20.532864  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:20.532870  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:20.532875  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:20 GMT
	I0916 10:57:20.532879  167544 round_trippers.go:580]     Audit-Id: c133f6fa-4330-4adc-8d09-0fcf5b801a40
	I0916 10:57:20.532882  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:20.532887  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:20.533004  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"828","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6096 chars]
	I0916 10:57:20.833480  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:20.833504  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:20.833512  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:20.833517  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:20.835749  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:20.835773  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:20.835780  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:20 GMT
	I0916 10:57:20.835784  167544 round_trippers.go:580]     Audit-Id: 616e4ef1-f887-4674-9ded-11573a8a17aa
	I0916 10:57:20.835787  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:20.835790  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:20.835793  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:20.835795  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:20.836040  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:20.929816  167544 request.go:632] Waited for 93.318272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:20.929915  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:20.929927  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:20.929937  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:20.929950  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:20.932390  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:20.932414  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:20.932423  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:20.932427  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:20 GMT
	I0916 10:57:20.932431  167544 round_trippers.go:580]     Audit-Id: abde34f4-cea8-443a-9a1b-f3dd2a78ac1e
	I0916 10:57:20.932436  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:20.932440  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:20.932445  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:20.932557  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"828","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6096 chars]
	I0916 10:57:21.334167  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:21.334189  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:21.334197  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:21.334201  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:21.336439  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:21.336465  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:21.336475  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:21.336480  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:21.336484  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:21 GMT
	I0916 10:57:21.336489  167544 round_trippers.go:580]     Audit-Id: 7db258d4-f410-4fae-b5c6-6ad0e27a5fc9
	I0916 10:57:21.336493  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:21.336497  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:21.336626  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:21.337098  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:21.337112  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:21.337119  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:21.337124  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:21.338872  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:21.338887  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:21.338893  167544 round_trippers.go:580]     Audit-Id: 9f6d9331-d497-438c-9062-d309e735a541
	I0916 10:57:21.338898  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:21.338902  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:21.338911  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:21.338918  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:21.338922  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:21 GMT
	I0916 10:57:21.339072  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"828","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6096 chars]
	I0916 10:57:21.339395  167544 pod_ready.go:103] pod "kube-proxy-g86bs" in "kube-system" namespace has status "Ready":"False"
	I0916 10:57:21.833504  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:21.833526  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:21.833535  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:21.833540  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:21.837280  167544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:57:21.837304  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:21.837314  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:21 GMT
	I0916 10:57:21.837319  167544 round_trippers.go:580]     Audit-Id: 46fe4bc9-1e0b-474b-bd25-4272c41787a0
	I0916 10:57:21.837324  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:21.837403  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:21.837415  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:21.837421  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:21.837545  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:21.838006  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:21.838020  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:21.838027  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:21.838031  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:21.839771  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:21.839787  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:21.839796  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:21 GMT
	I0916 10:57:21.839799  167544 round_trippers.go:580]     Audit-Id: 251ab47a-4242-42ab-8396-6a31ce809267
	I0916 10:57:21.839802  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:21.839806  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:21.839808  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:21.839816  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:21.839908  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:22.333529  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:22.333551  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:22.333559  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:22.333563  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:22.336289  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:22.336307  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:22.336314  167544 round_trippers.go:580]     Audit-Id: 45b659c6-dede-4c1e-a618-1da33bf14b2d
	I0916 10:57:22.336317  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:22.336321  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:22.336325  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:22.336330  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:22.336334  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:22 GMT
	I0916 10:57:22.336473  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:22.336930  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:22.336946  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:22.336953  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:22.336957  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:22.338734  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:22.338753  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:22.338761  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:22.338765  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:22.338770  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:22.338775  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:22 GMT
	I0916 10:57:22.338784  167544 round_trippers.go:580]     Audit-Id: 28c27743-eb0e-47b4-97dd-52330f638a53
	I0916 10:57:22.338789  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:22.338894  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:22.833505  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:22.833544  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:22.833553  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:22.833558  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:22.835853  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:22.835874  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:22.835880  167544 round_trippers.go:580]     Audit-Id: 54cdaa2a-325e-4580-8cd3-75094f6be993
	I0916 10:57:22.835884  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:22.835893  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:22.835897  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:22.835901  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:22.835907  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:22 GMT
	I0916 10:57:22.836090  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:22.836558  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:22.836579  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:22.836592  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:22.836597  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:22.838429  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:22.838445  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:22.838451  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:22.838455  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:22.838458  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:22 GMT
	I0916 10:57:22.838461  167544 round_trippers.go:580]     Audit-Id: 03135a50-227d-4f6d-b96d-4799db9d6522
	I0916 10:57:22.838463  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:22.838466  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:22.838605  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:23.334244  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:23.334269  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:23.334277  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:23.334280  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:23.336726  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:23.336747  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:23.336754  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:23.336758  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:23.336761  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:23 GMT
	I0916 10:57:23.336764  167544 round_trippers.go:580]     Audit-Id: 0027f13b-3e51-4f73-9c0c-df80f8d33e14
	I0916 10:57:23.336766  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:23.336772  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:23.336944  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:23.337435  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:23.337450  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:23.337457  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:23.337461  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:23.339434  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:23.339446  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:23.339451  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:23.339455  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:23 GMT
	I0916 10:57:23.339458  167544 round_trippers.go:580]     Audit-Id: db88dd91-4f70-427b-800c-5cf46975ec85
	I0916 10:57:23.339460  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:23.339463  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:23.339466  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:23.339617  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:23.339899  167544 pod_ready.go:103] pod "kube-proxy-g86bs" in "kube-system" namespace has status "Ready":"False"
	I0916 10:57:23.834288  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:23.834309  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:23.834318  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:23.834324  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:23.836332  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:23.836357  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:23.836368  167544 round_trippers.go:580]     Audit-Id: 0e2319c8-356d-431a-a5e1-18e0848a8e53
	I0916 10:57:23.836373  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:23.836377  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:23.836380  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:23.836383  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:23.836388  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:23 GMT
	I0916 10:57:23.836585  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:23.837106  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:23.837124  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:23.837131  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:23.837134  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:23.839060  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:23.839078  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:23.839085  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:23.839089  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:23.839093  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:23.839098  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:23.839102  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:23 GMT
	I0916 10:57:23.839106  167544 round_trippers.go:580]     Audit-Id: 5b97e246-3d2b-454f-ad31-cabe9cfb63ac
	I0916 10:57:23.839204  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:24.333501  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:24.333528  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:24.333536  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:24.333542  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:24.335602  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:24.335635  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:24.335642  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:24 GMT
	I0916 10:57:24.335646  167544 round_trippers.go:580]     Audit-Id: ec6cdf11-2436-4156-a00c-1008b34cd5a6
	I0916 10:57:24.335650  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:24.335653  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:24.335656  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:24.335659  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:24.335793  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:24.336277  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:24.336293  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:24.336300  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:24.336303  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:24.338093  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:24.338114  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:24.338124  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:24.338130  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:24 GMT
	I0916 10:57:24.338136  167544 round_trippers.go:580]     Audit-Id: 67d58345-0e30-4cbd-8a5a-d102a922a316
	I0916 10:57:24.338140  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:24.338144  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:24.338149  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:24.338254  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:24.834099  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:24.834131  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:24.834140  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:24.834145  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:24.836594  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:24.836619  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:24.836628  167544 round_trippers.go:580]     Audit-Id: f72ff7c7-c68c-4f60-95d9-e4c8d70c73ce
	I0916 10:57:24.836632  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:24.836637  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:24.836641  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:24.836645  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:24.836648  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:24 GMT
	I0916 10:57:24.836768  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"809","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6403 chars]
	I0916 10:57:24.837206  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:24.837222  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:24.837231  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:24.837240  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:24.840385  167544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:57:24.840409  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:24.840418  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:24.840423  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:24.840428  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:24 GMT
	I0916 10:57:24.840432  167544 round_trippers.go:580]     Audit-Id: 78710cf3-0dc0-4e53-b10b-3a270617c42a
	I0916 10:57:24.840436  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:24.840443  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:24.840543  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:25.334218  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:25.334240  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:25.334248  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:25.334252  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:25.336414  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:25.336435  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:25.336443  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:25.336448  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:25.336453  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:25.336460  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:25 GMT
	I0916 10:57:25.336468  167544 round_trippers.go:580]     Audit-Id: c96371a3-075a-4d89-a32a-f88075d76a20
	I0916 10:57:25.336474  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:25.336631  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"862","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6651 chars]
	I0916 10:57:25.337111  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:25.337126  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:25.337134  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:25.337138  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:25.339022  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:25.339039  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:25.339046  167544 round_trippers.go:580]     Audit-Id: 12794342-cc05-4dfb-bb8e-ec7c9573304d
	I0916 10:57:25.339051  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:25.339054  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:25.339058  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:25.339062  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:25.339065  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:25 GMT
	I0916 10:57:25.339219  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:25.833527  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:25.833550  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:25.833558  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:25.833562  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:25.835792  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:25.835813  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:25.835822  167544 round_trippers.go:580]     Audit-Id: a0d607d1-9b6c-48bd-a34c-b8eb88ba8187
	I0916 10:57:25.835827  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:25.835831  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:25.835836  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:25.835840  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:25.835845  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:25 GMT
	I0916 10:57:25.836082  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"862","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6651 chars]
	I0916 10:57:25.836534  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:25.836549  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:25.836559  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:25.836565  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:25.838221  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:25.838239  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:25.838245  167544 round_trippers.go:580]     Audit-Id: cc082d49-aaeb-4a3c-98f7-67bd2ca3966b
	I0916 10:57:25.838249  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:25.838252  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:25.838255  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:25.838257  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:25.838260  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:25 GMT
	I0916 10:57:25.838389  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:25.838693  167544 pod_ready.go:103] pod "kube-proxy-g86bs" in "kube-system" namespace has status "Ready":"False"
	I0916 10:57:26.334084  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:57:26.334107  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:26.334114  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:26.334117  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:26.336385  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:26.336405  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:26.336411  167544 round_trippers.go:580]     Audit-Id: 456c29a6-bd3f-45eb-9763-5b75ea6e9daa
	I0916 10:57:26.336416  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:26.336419  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:26.336421  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:26.336424  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:26.336427  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:26 GMT
	I0916 10:57:26.336593  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"871","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:57:26.337139  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:57:26.337156  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:26.337167  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:26.337175  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:26.338846  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:26.338876  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:26.338886  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:26.338895  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:26.338900  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:26.338907  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:26.338913  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:26 GMT
	I0916 10:57:26.338923  167544 round_trippers.go:580]     Audit-Id: 9f1ec749-c902-4adb-a472-92138c693226
	I0916 10:57:26.339035  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m03","uid":"3def10ba-4e73-469d-92e9-197921c49326","resourceVersion":"855","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_55_07_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 5974 chars]
	I0916 10:57:26.339304  167544 pod_ready.go:93] pod "kube-proxy-g86bs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:26.339317  167544 pod_ready.go:82] duration metric: took 7.006041943s for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:26.339327  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:26.339372  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:57:26.339379  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:26.339388  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:26.339395  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:26.341032  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:26.341052  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:26.341061  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:26.341068  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:26 GMT
	I0916 10:57:26.341074  167544 round_trippers.go:580]     Audit-Id: 17a59b11-9899-46bb-93ca-82f67d63e0b7
	I0916 10:57:26.341083  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:26.341089  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:26.341092  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:26.341210  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qds2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac30bd54-b932-4f52-a53c-4edbc5eefc7c","resourceVersion":"784","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:57:26.341732  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:57:26.341750  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:26.341760  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:26.341764  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:26.343268  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:26.343291  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:26.343301  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:26.343308  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:26.343312  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:26.343316  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:26.343325  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:26 GMT
	I0916 10:57:26.343333  167544 round_trippers.go:580]     Audit-Id: 33dde0e0-30ed-482b-9e6e-c38346819ec3
	I0916 10:57:26.343438  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"738","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6052 chars]
	I0916 10:57:26.343828  167544 pod_ready.go:93] pod "kube-proxy-qds2d" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:26.343847  167544 pod_ready.go:82] duration metric: took 4.514211ms for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:26.343860  167544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:26.343929  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:57:26.343939  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:26.343945  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:26.343950  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:26.345573  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:26.345593  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:26.345603  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:26.345608  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:26.345613  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:26.345617  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:26 GMT
	I0916 10:57:26.345625  167544 round_trippers.go:580]     Audit-Id: 9d9f2d14-1118-4735-ac2d-0785a307329b
	I0916 10:57:26.345630  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:26.345740  167544 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"723","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5101 chars]
	I0916 10:57:26.346093  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:57:26.346110  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:26.346119  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:26.346134  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:26.347638  167544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:57:26.347653  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:26.347661  167544 round_trippers.go:580]     Audit-Id: de667e50-8973-406a-b099-ca4b41faf3b1
	I0916 10:57:26.347665  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:26.347670  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:26.347675  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:26.347679  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:26.347685  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:26 GMT
	I0916 10:57:26.347779  167544 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:57:26.348079  167544 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:26.348096  167544 pod_ready.go:82] duration metric: took 4.224701ms for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:26.348109  167544 pod_ready.go:39] duration metric: took 7.415712367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:57:26.348128  167544 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:57:26.348182  167544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:57:26.359714  167544 system_svc.go:56] duration metric: took 11.578982ms WaitForService to wait for kubelet
	I0916 10:57:26.359748  167544 kubeadm.go:582] duration metric: took 15.023094107s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:57:26.359765  167544 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:57:26.359852  167544 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:57:26.359862  167544 round_trippers.go:469] Request Headers:
	I0916 10:57:26.359869  167544 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:26.359877  167544 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:26.362482  167544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:26.362505  167544 round_trippers.go:577] Response Headers:
	I0916 10:57:26.362513  167544 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:26.362517  167544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:57:26.362520  167544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:57:26.362524  167544 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:26 GMT
	I0916 10:57:26.362527  167544 round_trippers.go:580]     Audit-Id: 589fb60f-14a3-40a4-88c9-43c5726f2a8b
	I0916 10:57:26.362530  167544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:26.362988  167544 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"874"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 20327 chars]
	I0916 10:57:26.363853  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:26.363882  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:26.363896  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:26.363906  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:26.363912  167544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:26.363920  167544 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:26.363926  167544 node_conditions.go:105] duration metric: took 4.154994ms to run NodePressure ...
	I0916 10:57:26.363942  167544 start.go:241] waiting for startup goroutines ...
	I0916 10:57:26.363969  167544 start.go:255] writing updated cluster config ...
	I0916 10:57:26.364312  167544 ssh_runner.go:195] Run: rm -f paused
	I0916 10:57:26.371346  167544 out.go:177] * Done! kubectl is now configured to use "multinode-026168" cluster and "default" namespace by default
	E0916 10:57:26.372654  167544 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 10:56:24 multinode-026168 crio[665]: time="2024-09-16 10:56:24.611004591Z" level=info msg="Started container" PID=1377 containerID=2c744a4617936970c0f015163ce27175105b34aa74e00a1d86e014f8d0322fa1 description=default/busybox-7dff88458-qt9rx/busybox id=1b08746a-1a25-4a02-b5fe-01a300dd3fba name=/runtime.v1.RuntimeService/StartContainer sandboxID=6658d696060b59c4a0ab7de19dbc4e7b5fd3523df7836a4aa0abb14d68a298cb
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.097718994Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.101882278Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.101927233Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.101947265Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.105415238Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.105454640Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.105467798Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.109072852Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.109114787Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.109146226Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.112765685Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:56:35 multinode-026168 crio[665]: time="2024-09-16 10:56:35.112792411Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:56:54 multinode-026168 conmon[1255]: conmon 098b269138c39f3dfbf3 <ninfo>: container 1277 exited with status 1
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.099124088Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a056272f-9c08-4896-8fbf-1c077dbed4e2 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.099348141Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a056272f-9c08-4896-8fbf-1c077dbed4e2 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.099998315Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=14b4d0f9-e2f8-4f0d-befa-5af97952b9c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.100194203Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=14b4d0f9-e2f8-4f0d-befa-5af97952b9c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.100846764Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8548c1f1-b018-498d-83ec-9b9aa3a49c46 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.100950216Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.112015974Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e6f59bacda6da6260e38f24f5e94703e4e64d9b6dc8a1cf9f9f5c56f73d53446/merged/etc/passwd: no such file or directory"
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.112051659Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e6f59bacda6da6260e38f24f5e94703e4e64d9b6dc8a1cf9f9f5c56f73d53446/merged/etc/group: no such file or directory"
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.144058869Z" level=info msg="Created container 0f875bed4f3f6cb12db510c923b72a1435fc03a3309389fca0ce9d83ebb1101c: kube-system/storage-provisioner/storage-provisioner" id=8548c1f1-b018-498d-83ec-9b9aa3a49c46 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.144629528Z" level=info msg="Starting container: 0f875bed4f3f6cb12db510c923b72a1435fc03a3309389fca0ce9d83ebb1101c" id=9e749c66-d66a-45bd-8be5-c8a902bb0a43 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:56:55 multinode-026168 crio[665]: time="2024-09-16 10:56:55.151080794Z" level=info msg="Started container" PID=1694 containerID=0f875bed4f3f6cb12db510c923b72a1435fc03a3309389fca0ce9d83ebb1101c description=kube-system/storage-provisioner/storage-provisioner id=9e749c66-d66a-45bd-8be5-c8a902bb0a43 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4f7b4288fa26e4d12f23fb43674ed056809924b6b2923913a3ad1f91f91bb2d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0f875bed4f3f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   37 seconds ago       Running             storage-provisioner       2                   e4f7b4288fa26       storage-provisioner
	46f16f2cb8f23       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Running             coredns                   1                   2978fd6dbb0ca       coredns-7c65d6cfc9-s82cx
	2c744a4617936       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   About a minute ago   Running             busybox                   1                   6658d696060b5       busybox-7dff88458-qt9rx
	098b269138c39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       1                   e4f7b4288fa26       storage-provisioner
	51723ce420ed2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Running             kube-proxy                1                   07910d0788bf1       kube-proxy-6p6vt
	a2987df8460af       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   About a minute ago   Running             kindnet-cni               1                   d7bb048ac856b       kindnet-zv2p5
	feeffd9edfa13       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      2                   b3901e4a2318a       etcd-multinode-026168
	3311dd85c0368       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Running             kube-apiserver            1                   96e812c8c9034       kube-apiserver-multinode-026168
	12489e6b16e45       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Running             kube-controller-manager   1                   73999c29094eb       kube-controller-manager-multinode-026168
	ff2904830643e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Running             kube-scheduler            1                   93f3ef9728095       kube-scheduler-multinode-026168
	
	
	==> coredns [46f16f2cb8f23cc4bb2b1a1c1ece0c4c1893a0c4d513ab392d9b2459637ba73b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38986 - 32786 "HINFO IN 7798351196830728577.4894156367820129815. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017159274s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1005945273]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:56:24.641) (total time: 30000ms):
	Trace[1005945273]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:56:54.641)
	Trace[1005945273]: [30.00061645s] [30.00061645s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[186645190]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:56:24.641) (total time: 30000ms):
	Trace[186645190]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:56:54.641)
	Trace[186645190]: [30.000695136s] [30.000695136s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[24545538]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:56:24.641) (total time: 30000ms):
	Trace[24545538]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:56:54.641)
	Trace[24545538]: [30.000792616s] [30.000792616s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               multinode-026168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_53_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:53:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:57:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:56:23 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:56:23 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:56:23 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:56:23 +0000   Mon, 16 Sep 2024 10:54:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-026168
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 ffda8b3697164dce8e3950e65b8e3773
	  System UUID:                8db2fd04-b5e4-4ec7-8d8e-d94280ac94a3
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qt9rx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 coredns-7c65d6cfc9-s82cx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m53s
	  kube-system                 etcd-multinode-026168                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m58s
	  kube-system                 kindnet-zv2p5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m53s
	  kube-system                 kube-apiserver-multinode-026168             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-multinode-026168    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-6p6vt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-scheduler-multinode-026168             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3m52s              kube-proxy       
	  Normal   Starting                 68s                kube-proxy       
	  Normal   NodeHasSufficientPID     3m58s              kubelet          Node multinode-026168 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 3m58s              kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m58s              kubelet          Node multinode-026168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m58s              kubelet          Node multinode-026168 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 3m58s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           3m54s              node-controller  Node multinode-026168 event: Registered Node multinode-026168 in Controller
	  Normal   NodeReady                3m12s              kubelet          Node multinode-026168 status is now: NodeReady
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node multinode-026168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node multinode-026168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node multinode-026168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           66s                node-controller  Node multinode-026168 event: Registered Node multinode-026168 in Controller
	
	
	Name:               multinode-026168-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_54_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:54:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:57:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:56:44 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:56:44 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:56:44 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:56:44 +0000   Mon, 16 Sep 2024 10:54:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-026168-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 647aed8cbd0a4c65af55050b8cc66cae
	  System UUID:                50f4fbf1-c6a3-4700-a79b-bb8841197877
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z8csk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kindnet-mckv5              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m56s
	  kube-system                 kube-proxy-qds2d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 42s                    kube-proxy       
	  Normal   Starting                 2m54s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m56s (x2 over 2m57s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m56s (x2 over 2m57s)  kubelet          Node multinode-026168-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m56s (x2 over 2m57s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m54s                  node-controller  Node multinode-026168-m02 event: Registered Node multinode-026168-m02 in Controller
	  Normal   NodeReady                2m44s                  kubelet          Node multinode-026168-m02 status is now: NodeReady
	  Normal   RegisteredNode           66s                    node-controller  Node multinode-026168-m02 event: Registered Node multinode-026168-m02 in Controller
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     54s (x7 over 61s)      kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  48s (x8 over 61s)      kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 61s)      kubelet          Node multinode-026168-m02 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +1.011853] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000006] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.011869] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000006] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  -0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000027] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.223643] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000010] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.004008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000255] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.187075] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000006] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000010] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	
	
	==> etcd [feeffd9edfa13e02379af1b5059cada9dcb6a441c0270e5e9549d22440979ce9] <==
	{"level":"info","ts":"2024-09-16T10:56:20.302897Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:56:20.303057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:20.303105Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:20.303150Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:20.306126Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:56:20.306245Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:56:20.306292Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:56:20.306416Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:56:20.306463Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:56:22.194548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.194604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.194641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.194655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:56:22.194661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-09-16T10:56:22.194671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:56:22.194679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-09-16T10:56:22.197462Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:56:22.197478Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:56:22.197708Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:56:22.197453Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-026168 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:56:22.197739Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:56:22.198699Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:22.198748Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:22.200463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:56:22.200486Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> kernel <==
	 10:57:33 up 39 min,  0 users,  load average: 1.14, 1.25, 1.02
	Linux multinode-026168 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [a2987df8460af03d8e26f64f604ab666664283d6f66cdffd3138a2499e363aa1] <==
	I0916 10:56:45.094039       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:56:55.096097       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:56:55.096181       1 main.go:299] handling current node
	I0916 10:56:55.096213       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:56:55.096223       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:56:55.096362       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:56:55.096375       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:05.093792       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:05.093829       1 main.go:299] handling current node
	I0916 10:57:05.093848       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:05.093855       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:05.094016       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:05.094028       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:15.095954       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:15.096020       1 main.go:299] handling current node
	I0916 10:57:15.096039       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:15.096045       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:15.096227       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:15.096237       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:25.094399       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:25.094453       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:25.094616       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:25.094631       1 main.go:322] Node multinode-026168-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:25.094694       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:25.094707       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3311dd85c036898f28c133ab5659e93e447fc86d8abd691077ad3f47d17320dc] <==
	I0916 10:56:23.140251       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0916 10:56:23.140260       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0916 10:56:23.139483       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0916 10:56:23.141142       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0916 10:56:23.293795       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:56:23.293801       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:56:23.293867       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:56:23.294730       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:56:23.294857       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:56:23.294879       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:56:23.294888       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:56:23.294895       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:56:23.294999       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:56:23.293955       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:56:23.295178       1 policy_source.go:224] refreshing policies
	I0916 10:56:23.295630       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:56:23.294069       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:56:23.294129       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:56:23.294158       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:56:23.302329       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:56:23.316553       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:56:23.401000       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:56:24.142482       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:56:26.734589       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:56:26.920463       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [12489e6b16e454c85029c92df3d00feafad1ca15afcb8c8f2a921a7448cabee4] <==
	I0916 10:56:26.767633       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:56:26.768340       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:56:26.771831       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:56:27.183434       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:56:27.230150       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:56:27.230187       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:56:44.476312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m02"
	I0916 10:56:49.737016       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.110824ms"
	I0916 10:56:49.737117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.532µs"
	I0916 10:56:50.773640       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.348297ms"
	I0916 10:56:50.773732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.68µs"
	I0916 10:57:03.289627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.917514ms"
	I0916 10:57:03.289761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.345µs"
	I0916 10:57:06.780658       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:06.781018       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-026168-m02"
	I0916 10:57:06.791063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:11.886847       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:18.839880       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-026168-m03"
	I0916 10:57:18.839904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:18.848862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:21.808673       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:26.902122       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:26.910777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:27.493666       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m03"
	I0916 10:57:27.493675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-026168-m02"
	
	
	==> kube-proxy [51723ce420ed25cc89d068270d21e2c143818e129a7d8caf82c23b23e5ec1643] <==
	I0916 10:56:24.617481       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:56:24.744904       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:56:24.744973       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:56:24.763036       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:56:24.763087       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:56:24.764964       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:56:24.765328       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:56:24.765401       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:56:24.766664       1 config.go:199] "Starting service config controller"
	I0916 10:56:24.766754       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:56:24.766676       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:56:24.766804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:56:24.766695       1 config.go:328] "Starting node config controller"
	I0916 10:56:24.766818       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:56:24.866944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:56:24.866997       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:56:24.867017       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ff2904830643e040379478f6ffa8f470bfcff836e175901630632dfbee3daa23] <==
	I0916 10:56:20.601020       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:56:23.210249       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:56:23.210392       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:56:23.210470       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:56:23.210486       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:56:23.300794       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:56:23.300911       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:56:23.307944       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:56:23.313235       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:56:23.313799       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:56:23.313879       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:56:23.414954       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:56:23 multinode-026168 kubelet[813]: I0916 10:56:23.995591     813 apiserver.go:52] "Watching apiserver"
	Sep 16 10:56:24 multinode-026168 kubelet[813]: I0916 10:56:24.098835     813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:56:24 multinode-026168 kubelet[813]: I0916 10:56:24.115908     813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e993dc5-3e51-407a-96f0-81c74274fb7c-xtables-lock\") pod \"kindnet-zv2p5\" (UID: \"9e993dc5-3e51-407a-96f0-81c74274fb7c\") " pod="kube-system/kindnet-zv2p5"
	Sep 16 10:56:24 multinode-026168 kubelet[813]: I0916 10:56:24.115960     813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42162ba1-cb61-4a95-acc5-5c4c5f3ead8c-lib-modules\") pod \"kube-proxy-6p6vt\" (UID: \"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c\") " pod="kube-system/kube-proxy-6p6vt"
	Sep 16 10:56:24 multinode-026168 kubelet[813]: I0916 10:56:24.115976     813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9e993dc5-3e51-407a-96f0-81c74274fb7c-cni-cfg\") pod \"kindnet-zv2p5\" (UID: \"9e993dc5-3e51-407a-96f0-81c74274fb7c\") " pod="kube-system/kindnet-zv2p5"
	Sep 16 10:56:24 multinode-026168 kubelet[813]: I0916 10:56:24.116003     813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7-tmp\") pod \"storage-provisioner\" (UID: \"ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:56:24 multinode-026168 kubelet[813]: I0916 10:56:24.116058     813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42162ba1-cb61-4a95-acc5-5c4c5f3ead8c-xtables-lock\") pod \"kube-proxy-6p6vt\" (UID: \"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c\") " pod="kube-system/kube-proxy-6p6vt"
	Sep 16 10:56:24 multinode-026168 kubelet[813]: I0916 10:56:24.116084     813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e993dc5-3e51-407a-96f0-81c74274fb7c-lib-modules\") pod \"kindnet-zv2p5\" (UID: \"9e993dc5-3e51-407a-96f0-81c74274fb7c\") " pod="kube-system/kindnet-zv2p5"
	Sep 16 10:56:24 multinode-026168 kubelet[813]: I0916 10:56:24.123797     813 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:56:29 multinode-026168 kubelet[813]: E0916 10:56:29.026952     813 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484189026776408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:56:29 multinode-026168 kubelet[813]: E0916 10:56:29.026991     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484189026776408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:56:33 multinode-026168 kubelet[813]: I0916 10:56:33.270744     813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:56:39 multinode-026168 kubelet[813]: E0916 10:56:39.028631     813 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484199028224382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:56:39 multinode-026168 kubelet[813]: E0916 10:56:39.028669     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484199028224382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:56:49 multinode-026168 kubelet[813]: E0916 10:56:49.030865     813 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484209030681198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:56:49 multinode-026168 kubelet[813]: E0916 10:56:49.030902     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484209030681198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:56:55 multinode-026168 kubelet[813]: I0916 10:56:55.098602     813 scope.go:117] "RemoveContainer" containerID="098b269138c39f3dfbf3c0866da6d463a2e72b46f542d71d00583b6839dd0f46"
	Sep 16 10:56:59 multinode-026168 kubelet[813]: E0916 10:56:59.032387     813 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484219032190429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:56:59 multinode-026168 kubelet[813]: E0916 10:56:59.032443     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484219032190429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:57:09 multinode-026168 kubelet[813]: E0916 10:57:09.033576     813 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484229033326580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:57:09 multinode-026168 kubelet[813]: E0916 10:57:09.033618     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484229033326580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:57:19 multinode-026168 kubelet[813]: E0916 10:57:19.034797     813 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484239034557904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:57:19 multinode-026168 kubelet[813]: E0916 10:57:19.034852     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484239034557904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:57:29 multinode-026168 kubelet[813]: E0916 10:57:29.036205     813 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484249036017391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:57:29 multinode-026168 kubelet[813]: E0916 10:57:29.036246     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484249036017391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-026168 -n multinode-026168
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (527.417µs)
helpers_test.go:263: kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/DeleteNode (7.77s)
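Every kubectl invocation in this run fails with the same error, "fork/exec /usr/local/bin/kubectl: exec format error". The kernel raises exec format error when the file at that path is not an executable it can load (typically a binary built for a different CPU architecture, or a truncated download), so the failure points at the kubectl binary on the Jenkins agent rather than at the clusters under test. A minimal diagnostic sketch, assuming shell access to the agent (the kubectl path comes from the error text; the expected x86_64 architecture comes from the hostinfo and docker info entries later in this report):

	# Compare the host architecture with the kubectl binary's format.
	uname -m                                  # expected: x86_64 on this agent
	file /usr/local/bin/kubectl               # a healthy binary reports: ELF 64-bit LSB executable, x86-64, ...
	head -c 4 /usr/local/bin/kubectl | od -c  # a loadable ELF file begins with \177 E L F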

TestMultiNode/serial/RestartMultiNode (55.38s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026168 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026168 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (52.603077056s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:396: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (499.862µs)
multinode_test.go:398: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-026168
helpers_test.go:235: (dbg) docker inspect multinode-026168:

-- stdout --
	[
	    {
	        "Id": "23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74",
	        "Created": "2024-09-16T10:53:21.752929602Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 175536,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:57:58.304921231Z",
	            "FinishedAt": "2024-09-16T10:57:57.463071933Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/hostname",
	        "HostsPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/hosts",
	        "LogPath": "/var/lib/docker/containers/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74-json.log",
	        "Name": "/multinode-026168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-026168:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-026168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/32d128093c3024ce44fd3985ba0fa4e33e340c773a649d6605004fd7b43448b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-026168",
	                "Source": "/var/lib/docker/volumes/multinode-026168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-026168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-026168",
	                "name.minikube.sigs.k8s.io": "multinode-026168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a7a2d732530102540e56056bac739ec6409baa1c1444adf08ed31b6b1e6a8ec",
	            "SandboxKey": "/var/run/docker/netns/1a7a2d732530",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32938"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32939"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32942"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32940"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32941"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-026168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a5a173559814a989877e5b7826f3cf7f4df5f065fe1cdcc6350cf486bc64e678",
	                    "EndpointID": "4c020bb4ce9185beb4a3aab48a503e8df51ed45899979e3a6e9c57489476402b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-026168",
	                        "23ba806c0524"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-026168 -n multinode-026168
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-026168 logs -n 25: (1.263136337s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168:/home/docker/cp-test_multinode-026168-m02_multinode-026168.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168 sudo cat                                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m02_multinode-026168.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03:/home/docker/cp-test_multinode-026168-m02_multinode-026168-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168-m03 sudo cat                                   | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m02_multinode-026168-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp testdata/cp-test.txt                                                | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2288589271/001/cp-test_multinode-026168-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168:/home/docker/cp-test_multinode-026168-m03_multinode-026168.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168 sudo cat                                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m03_multinode-026168.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt                       | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m02:/home/docker/cp-test_multinode-026168-m03_multinode-026168-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n                                                                 | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | multinode-026168-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-026168 ssh -n multinode-026168-m02 sudo cat                                   | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /home/docker/cp-test_multinode-026168-m03_multinode-026168-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-026168 node stop m03                                                          | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	| node    | multinode-026168 node start                                                             | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-026168                                                                | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC |                     |
	| stop    | -p multinode-026168                                                                     | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:56 UTC |
	| start   | -p multinode-026168                                                                     | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:57 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-026168                                                                | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC |                     |
	| node    | multinode-026168 node delete                                                            | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-026168 stop                                                                   | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	| start   | -p multinode-026168                                                                     | multinode-026168 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:58 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	|         | --driver=docker                                                                         |                  |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:57:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:57:57.935258  175223 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:57:57.935418  175223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:57:57.935429  175223 out.go:358] Setting ErrFile to fd 2...
	I0916 10:57:57.935436  175223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:57:57.935632  175223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:57:57.936210  175223 out.go:352] Setting JSON to false
	I0916 10:57:57.937212  175223 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2418,"bootTime":1726481860,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:57:57.937315  175223 start.go:139] virtualization: kvm guest
	I0916 10:57:57.939679  175223 out.go:177] * [multinode-026168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:57:57.940967  175223 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:57:57.940967  175223 notify.go:220] Checking for updates...
	I0916 10:57:57.943563  175223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:57:57.944979  175223 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:57:57.946503  175223 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:57:57.947912  175223 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:57:57.949483  175223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:57:57.951488  175223 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:57:57.952210  175223 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:57:57.975447  175223 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:57:57.975558  175223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:57:58.027080  175223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:57:58.017321193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:57:58.027176  175223 docker.go:318] overlay module found
	I0916 10:57:58.029187  175223 out.go:177] * Using the docker driver based on existing profile
	I0916 10:57:58.030350  175223 start.go:297] selected driver: docker
	I0916 10:57:58.030365  175223 start.go:901] validating driver "docker" against &{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:57:58.030492  175223 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:57:58.030581  175223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:57:58.082010  175223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:57:58.071318797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:57:58.082586  175223 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:57:58.082613  175223 cni.go:84] Creating CNI manager for ""
	I0916 10:57:58.082642  175223 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0916 10:57:58.082687  175223 start.go:340] cluster config:
	{Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:57:58.084752  175223 out.go:177] * Starting "multinode-026168" primary control-plane node in "multinode-026168" cluster
	I0916 10:57:58.086245  175223 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:57:58.087860  175223 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:57:58.089414  175223 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:57:58.089447  175223 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:57:58.089465  175223 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:57:58.089474  175223 cache.go:56] Caching tarball of preloaded images
	I0916 10:57:58.089568  175223 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:57:58.089584  175223 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:57:58.089729  175223 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	W0916 10:57:58.110191  175223 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:57:58.110210  175223 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:57:58.110282  175223 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:57:58.110297  175223 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:57:58.110303  175223 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:57:58.110310  175223 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:57:58.110317  175223 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:57:58.111443  175223 image.go:273] response: 
	I0916 10:57:58.167936  175223 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:57:58.167996  175223 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:57:58.168045  175223 start.go:360] acquireMachinesLock for multinode-026168: {Name:mk1016c8f1a43c2d6030796baf01aa33f86316e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:57:58.168141  175223 start.go:364] duration metric: took 56.32µs to acquireMachinesLock for "multinode-026168"
	I0916 10:57:58.168165  175223 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:57:58.168174  175223 fix.go:54] fixHost starting: 
	I0916 10:57:58.168380  175223 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:57:58.186715  175223 fix.go:112] recreateIfNeeded on multinode-026168: state=Stopped err=<nil>
	W0916 10:57:58.186752  175223 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:57:58.188900  175223 out.go:177] * Restarting existing docker container for "multinode-026168" ...
	I0916 10:57:58.190294  175223 cli_runner.go:164] Run: docker start multinode-026168
	I0916 10:57:58.455500  175223 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:57:58.473977  175223 kic.go:430] container "multinode-026168" state is running.
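The back-to-back docker container inspect calls are a readiness poll: start the container, then re-inspect until .State.Status reports "running". A hedged Go sketch of that loop, shelling out to the docker CLI exactly as the log does (the helper name and the 500ms interval are assumptions):

    package kicutil

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // WaitRunning polls `docker container inspect` until the container reports
    // the "running" state or the timeout elapses.
    func WaitRunning(name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "running" {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("container %q did not reach state running", name)
    }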
	I0916 10:57:58.474416  175223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:57:58.493086  175223 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:57:58.493330  175223 machine.go:93] provisionDockerMachine start ...
	I0916 10:57:58.493415  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:57:58.511050  175223 main.go:141] libmachine: Using SSH client type: native
	I0916 10:57:58.511311  175223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0916 10:57:58.511329  175223 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:57:58.511988  175223 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55598->127.0.0.1:32938: read: connection reset by peer
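The reset here is transient: sshd inside the just-restarted container is not accepting connections yet, and the next log line shows the same command succeeding about three seconds later. A sketch of a dial-until-ready loop under that reading, assuming golang.org/x/crypto/ssh (the actual retry policy is not visible in this log):

    package sshutil

    import (
        "time"

        "golang.org/x/crypto/ssh"
    )

    // DialWithRetry keeps redialing until sshd answers or the timeout expires,
    // surfacing the last dial error on failure.
    func DialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
        deadline := time.Now().Add(timeout)
        for {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            if time.Now().After(deadline) {
                return nil, err
            }
            time.Sleep(time.Second)
        }
    }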
	I0916 10:58:01.645029  175223 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168
	
	I0916 10:58:01.645068  175223 ubuntu.go:169] provisioning hostname "multinode-026168"
	I0916 10:58:01.645243  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:58:01.663736  175223 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:01.663919  175223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0916 10:58:01.663932  175223 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168 && echo "multinode-026168" | sudo tee /etc/hostname
	I0916 10:58:01.808230  175223 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168
	
	I0916 10:58:01.808302  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:58:01.826466  175223 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:01.826639  175223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0916 10:58:01.826655  175223 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:58:01.961915  175223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:58:01.961962  175223 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:58:01.962006  175223 ubuntu.go:177] setting up certificates
	I0916 10:58:01.962018  175223 provision.go:84] configureAuth start
	I0916 10:58:01.962079  175223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:58:01.980090  175223 provision.go:143] copyHostCerts
	I0916 10:58:01.980136  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:58:01.980169  175223 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:58:01.980178  175223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:58:01.980260  175223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:58:01.980358  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:58:01.980381  175223 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:58:01.980390  175223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:58:01.980432  175223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:58:01.980559  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:58:01.980613  175223 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:58:01.980623  175223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:58:01.980666  175223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:58:01.980741  175223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-026168]
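A hedged sketch of this step using Go's standard crypto/x509: mint a server key pair and have the cluster CA sign it, embedding exactly the SANs listed in the san=[...] above. The helper name and RSA key size are assumptions; only the SANs, the org string, and the 26280h expiry (visible in the profile dump later in this log) come from the log itself.

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // SignServerCert generates a fresh server key and signs a certificate for
    // it with the given CA, valid for the SANs logged above.
    func SignServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certPEM []byte, key *rsa.PrivateKey, err error) {
        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-026168"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
            DNSNames:    []string{"localhost", "minikube", "multinode-026168"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }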
	I0916 10:58:02.078409  175223 provision.go:177] copyRemoteCerts
	I0916 10:58:02.078482  175223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:58:02.078524  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:58:02.096450  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:58:02.189983  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:58:02.190039  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:58:02.212262  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:58:02.212316  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:58:02.234945  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:58:02.235009  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 10:58:02.257753  175223 provision.go:87] duration metric: took 295.720777ms to configureAuth
	I0916 10:58:02.257781  175223 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:58:02.257974  175223 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:58:02.258061  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:58:02.275563  175223 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:02.275757  175223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0916 10:58:02.275775  175223 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:58:02.582600  175223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:58:02.582625  175223 machine.go:96] duration metric: took 4.089270002s to provisionDockerMachine
	I0916 10:58:02.582637  175223 start.go:293] postStartSetup for "multinode-026168" (driver="docker")
	I0916 10:58:02.582650  175223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:58:02.582724  175223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:58:02.582766  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:58:02.600620  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:58:02.694375  175223 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:58:02.697759  175223 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:58:02.697792  175223 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:58:02.697803  175223 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:58:02.697811  175223 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:58:02.697819  175223 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:58:02.697824  175223 command_runner.go:130] > ID=ubuntu
	I0916 10:58:02.697828  175223 command_runner.go:130] > ID_LIKE=debian
	I0916 10:58:02.697832  175223 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:58:02.697844  175223 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:58:02.697855  175223 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:58:02.697865  175223 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:58:02.697871  175223 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:58:02.697932  175223 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:58:02.697966  175223 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:58:02.697988  175223 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:58:02.697999  175223 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:58:02.698015  175223 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:58:02.698084  175223 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:58:02.698190  175223 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:58:02.698202  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:58:02.698306  175223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:58:02.706400  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:58:02.728383  175223 start.go:296] duration metric: took 145.731671ms for postStartSetup
	I0916 10:58:02.728454  175223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:58:02.728493  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:58:02.746648  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:58:02.842130  175223 command_runner.go:130] > 30%
	I0916 10:58:02.842326  175223 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:58:02.846307  175223 command_runner.go:130] > 204G
	I0916 10:58:02.846513  175223 fix.go:56] duration metric: took 4.678334901s for fixHost
	I0916 10:58:02.846535  175223 start.go:83] releasing machines lock for "multinode-026168", held for 4.678380045s
	I0916 10:58:02.846591  175223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:58:02.863983  175223 ssh_runner.go:195] Run: cat /version.json
	I0916 10:58:02.864035  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:58:02.864149  175223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:58:02.864223  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:58:02.882602  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:58:02.886678  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:58:02.973068  175223 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:58:02.973202  175223 ssh_runner.go:195] Run: systemctl --version
	I0916 10:58:03.053540  175223 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:58:03.053597  175223 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:58:03.053623  175223 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:58:03.053690  175223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:58:03.191452  175223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:58:03.195548  175223 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf.mk_disabled
	I0916 10:58:03.195607  175223 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:58:03.195618  175223 command_runner.go:130] > Device: 37h/55d	Inode: 535096      Links: 1
	I0916 10:58:03.195628  175223 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:03.195637  175223 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:58:03.195648  175223 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:58:03.195657  175223 command_runner.go:130] > Change: 2024-09-16 10:53:24.206895094 +0000
	I0916 10:58:03.195670  175223 command_runner.go:130] >  Birth: 2024-09-16 10:53:24.202894799 +0000
	I0916 10:58:03.195827  175223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:58:03.204639  175223 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:58:03.204722  175223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:58:03.213190  175223 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
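Both find ... -exec mv commands above follow the same pattern: side-line matching CNI config files by renaming them with a .mk_disabled suffix so CRI-O ignores them, rather than deleting them outright. A sketch of the equivalent logic in Go (hypothetical helper, not minikube's code):

    package cni

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // DisableConfigs renames every file in dir matching one of the glob
    // patterns to <name>.mk_disabled, skipping files already disabled.
    func DisableConfigs(dir string, patterns []string) error {
        for _, pat := range patterns {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already side-lined
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return err
                }
            }
        }
        return nil
    }

Called as DisableConfigs("/etc/cni/net.d", []string{"*loopback.conf*", "*bridge*", "*podman*"}), it mirrors the two shell invocations above.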
	I0916 10:58:03.213216  175223 start.go:495] detecting cgroup driver to use...
	I0916 10:58:03.213252  175223 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:58:03.213306  175223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:58:03.224716  175223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:58:03.235080  175223 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:58:03.235130  175223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:58:03.246679  175223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:58:03.257714  175223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:58:03.331157  175223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:58:03.406195  175223 docker.go:233] disabling docker service ...
	I0916 10:58:03.406296  175223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:58:03.417741  175223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:58:03.428443  175223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:58:03.505695  175223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:58:03.582232  175223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:58:03.592461  175223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:58:03.606073  175223 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:58:03.606993  175223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:58:03.607046  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:03.616168  175223 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:58:03.616230  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:03.625295  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:03.634336  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:03.643607  175223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:58:03.652117  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:03.661400  175223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:03.670199  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
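Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in. This is reconstructed from the commands, not captured from the node, and the section headers are the stock CRI-O ones, which is an assumption:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]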
	I0916 10:58:03.679238  175223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:58:03.686961  175223 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:58:03.687036  175223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:58:03.694742  175223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:03.765893  175223 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:58:03.873137  175223 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:58:03.873205  175223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:58:03.876445  175223 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:58:03.876470  175223 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:58:03.876479  175223 command_runner.go:130] > Device: 41h/65d	Inode: 208         Links: 1
	I0916 10:58:03.876486  175223 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:03.876491  175223 command_runner.go:130] > Access: 2024-09-16 10:58:03.859465911 +0000
	I0916 10:58:03.876498  175223 command_runner.go:130] > Modify: 2024-09-16 10:58:03.859465911 +0000
	I0916 10:58:03.876506  175223 command_runner.go:130] > Change: 2024-09-16 10:58:03.859465911 +0000
	I0916 10:58:03.876511  175223 command_runner.go:130] >  Birth: -
	I0916 10:58:03.876530  175223 start.go:563] Will wait 60s for crictl version
	I0916 10:58:03.876575  175223 ssh_runner.go:195] Run: which crictl
	I0916 10:58:03.879442  175223 command_runner.go:130] > /usr/bin/crictl
	I0916 10:58:03.879510  175223 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:58:03.911893  175223 command_runner.go:130] > Version:  0.1.0
	I0916 10:58:03.911922  175223 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:58:03.911927  175223 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:58:03.911932  175223 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:58:03.911948  175223 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:58:03.912015  175223 ssh_runner.go:195] Run: crio --version
	I0916 10:58:03.944438  175223 command_runner.go:130] > crio version 1.24.6
	I0916 10:58:03.944458  175223 command_runner.go:130] > Version:          1.24.6
	I0916 10:58:03.944466  175223 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:58:03.944470  175223 command_runner.go:130] > GitTreeState:     clean
	I0916 10:58:03.944476  175223 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:58:03.944481  175223 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:58:03.944484  175223 command_runner.go:130] > Compiler:         gc
	I0916 10:58:03.944488  175223 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:58:03.944492  175223 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:58:03.944502  175223 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:58:03.944507  175223 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:58:03.944511  175223 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:58:03.946069  175223 ssh_runner.go:195] Run: crio --version
	I0916 10:58:03.979786  175223 command_runner.go:130] > crio version 1.24.6
	I0916 10:58:03.979816  175223 command_runner.go:130] > Version:          1.24.6
	I0916 10:58:03.979831  175223 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:58:03.979838  175223 command_runner.go:130] > GitTreeState:     clean
	I0916 10:58:03.979847  175223 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:58:03.979854  175223 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:58:03.979861  175223 command_runner.go:130] > Compiler:         gc
	I0916 10:58:03.979872  175223 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:58:03.979880  175223 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:58:03.979893  175223 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:58:03.979904  175223 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:58:03.979912  175223 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:58:03.982148  175223 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:58:03.983753  175223 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:58:03.999901  175223 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:58:04.003564  175223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:58:04.014200  175223 kubeadm.go:883] updating cluster {Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:58:04.014374  175223 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:58:04.014417  175223 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:58:04.051813  175223 command_runner.go:130] > {
	I0916 10:58:04.051836  175223 command_runner.go:130] >   "images": [
	I0916 10:58:04.051841  175223 command_runner.go:130] >     {
	I0916 10:58:04.051852  175223 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:58:04.051859  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.051867  175223 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:58:04.051873  175223 command_runner.go:130] >       ],
	I0916 10:58:04.051879  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.051890  175223 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:58:04.051902  175223 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:58:04.051911  175223 command_runner.go:130] >       ],
	I0916 10:58:04.051919  175223 command_runner.go:130] >       "size": "87190579",
	I0916 10:58:04.051929  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.051938  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.051949  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.051957  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.051966  175223 command_runner.go:130] >     },
	I0916 10:58:04.051973  175223 command_runner.go:130] >     {
	I0916 10:58:04.051986  175223 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 10:58:04.051993  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.052005  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 10:58:04.052011  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052021  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.052036  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 10:58:04.052051  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 10:58:04.052059  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052067  175223 command_runner.go:130] >       "size": "1363676",
	I0916 10:58:04.052077  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.052089  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.052097  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.052107  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.052113  175223 command_runner.go:130] >     },
	I0916 10:58:04.052122  175223 command_runner.go:130] >     {
	I0916 10:58:04.052135  175223 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:58:04.052143  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.052152  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:58:04.052161  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052168  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.052183  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:58:04.052198  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:58:04.052207  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052215  175223 command_runner.go:130] >       "size": "31470524",
	I0916 10:58:04.052224  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.052232  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.052241  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.052250  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.052257  175223 command_runner.go:130] >     },
	I0916 10:58:04.052264  175223 command_runner.go:130] >     {
	I0916 10:58:04.052276  175223 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:58:04.052288  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.052300  175223 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:58:04.052308  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052316  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.052332  175223 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:58:04.052352  175223 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:58:04.052361  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052370  175223 command_runner.go:130] >       "size": "63273227",
	I0916 10:58:04.052379  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.052387  175223 command_runner.go:130] >       "username": "nonroot",
	I0916 10:58:04.052395  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.052402  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.052411  175223 command_runner.go:130] >     },
	I0916 10:58:04.052417  175223 command_runner.go:130] >     {
	I0916 10:58:04.052430  175223 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:58:04.052439  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.052448  175223 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:58:04.052463  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052472  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.052484  175223 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:58:04.052499  175223 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:58:04.052507  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052515  175223 command_runner.go:130] >       "size": "149009664",
	I0916 10:58:04.052525  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.052533  175223 command_runner.go:130] >         "value": "0"
	I0916 10:58:04.052541  175223 command_runner.go:130] >       },
	I0916 10:58:04.052548  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.052557  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.052564  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.052572  175223 command_runner.go:130] >     },
	I0916 10:58:04.052578  175223 command_runner.go:130] >     {
	I0916 10:58:04.052591  175223 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:58:04.052601  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.052617  175223 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:58:04.052625  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052633  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.052647  175223 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:58:04.052663  175223 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:58:04.052672  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052680  175223 command_runner.go:130] >       "size": "95237600",
	I0916 10:58:04.052688  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.052696  175223 command_runner.go:130] >         "value": "0"
	I0916 10:58:04.052704  175223 command_runner.go:130] >       },
	I0916 10:58:04.052712  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.052720  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.052728  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.052737  175223 command_runner.go:130] >     },
	I0916 10:58:04.052743  175223 command_runner.go:130] >     {
	I0916 10:58:04.052757  175223 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:58:04.052767  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.052780  175223 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:58:04.052788  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052795  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.052811  175223 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:58:04.052826  175223 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:58:04.052834  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052841  175223 command_runner.go:130] >       "size": "89437508",
	I0916 10:58:04.052850  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.052857  175223 command_runner.go:130] >         "value": "0"
	I0916 10:58:04.052864  175223 command_runner.go:130] >       },
	I0916 10:58:04.052871  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.052880  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.052887  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.052893  175223 command_runner.go:130] >     },
	I0916 10:58:04.052900  175223 command_runner.go:130] >     {
	I0916 10:58:04.052912  175223 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:58:04.052921  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.052930  175223 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:58:04.052939  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052946  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.052970  175223 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:58:04.052985  175223 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:58:04.052990  175223 command_runner.go:130] >       ],
	I0916 10:58:04.052996  175223 command_runner.go:130] >       "size": "92733849",
	I0916 10:58:04.053002  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.053012  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.053019  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.053028  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.053033  175223 command_runner.go:130] >     },
	I0916 10:58:04.053042  175223 command_runner.go:130] >     {
	I0916 10:58:04.053054  175223 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:58:04.053063  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.053074  175223 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:58:04.053085  175223 command_runner.go:130] >       ],
	I0916 10:58:04.053094  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.053107  175223 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:58:04.053122  175223 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:58:04.053130  175223 command_runner.go:130] >       ],
	I0916 10:58:04.053138  175223 command_runner.go:130] >       "size": "68420934",
	I0916 10:58:04.053157  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.053166  175223 command_runner.go:130] >         "value": "0"
	I0916 10:58:04.053173  175223 command_runner.go:130] >       },
	I0916 10:58:04.053181  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.053188  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.053195  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.053203  175223 command_runner.go:130] >     },
	I0916 10:58:04.053210  175223 command_runner.go:130] >     {
	I0916 10:58:04.053222  175223 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:58:04.053231  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.053240  175223 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:58:04.053248  175223 command_runner.go:130] >       ],
	I0916 10:58:04.053258  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.053272  175223 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:58:04.053287  175223 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:58:04.053296  175223 command_runner.go:130] >       ],
	I0916 10:58:04.053304  175223 command_runner.go:130] >       "size": "742080",
	I0916 10:58:04.053312  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.053319  175223 command_runner.go:130] >         "value": "65535"
	I0916 10:58:04.053328  175223 command_runner.go:130] >       },
	I0916 10:58:04.053349  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.053357  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.053367  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.053373  175223 command_runner.go:130] >     }
	I0916 10:58:04.053382  175223 command_runner.go:130] >   ]
	I0916 10:58:04.053388  175223 command_runner.go:130] > }
	I0916 10:58:04.054071  175223 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:58:04.054093  175223 crio.go:433] Images already preloaded, skipping extraction
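The preload check is derivable from the JSON that `sudo crictl images --output json` prints above: unmarshal it and confirm every expected repo tag for the Kubernetes version is present. A sketch under that assumption (the struct fields mirror the JSON keys in the log; the expected-image list is whatever the caller supplies):

    package preload

    import "encoding/json"

    // imageList mirrors the shape of `crictl images --output json`.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // AllPreloaded reports whether every wanted tag appears in the crictl output.
    func AllPreloaded(raw []byte, want []string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        have := make(map[string]bool)
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, w := range want {
            if !have[w] {
                return false, nil // at least one required image is missing
            }
        }
        return true, nil
    }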
	I0916 10:58:04.054139  175223 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:58:04.085671  175223 command_runner.go:130] > {
	I0916 10:58:04.085692  175223 command_runner.go:130] >   "images": [
	I0916 10:58:04.085697  175223 command_runner.go:130] >     {
	I0916 10:58:04.085730  175223 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:58:04.085737  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.085743  175223 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:58:04.085746  175223 command_runner.go:130] >       ],
	I0916 10:58:04.085751  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.085761  175223 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:58:04.085771  175223 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:58:04.085777  175223 command_runner.go:130] >       ],
	I0916 10:58:04.085782  175223 command_runner.go:130] >       "size": "87190579",
	I0916 10:58:04.085789  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.085793  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.085803  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.085809  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.085813  175223 command_runner.go:130] >     },
	I0916 10:58:04.085818  175223 command_runner.go:130] >     {
	I0916 10:58:04.085825  175223 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 10:58:04.085846  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.085853  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 10:58:04.085857  175223 command_runner.go:130] >       ],
	I0916 10:58:04.085863  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.085870  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 10:58:04.085879  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 10:58:04.085885  175223 command_runner.go:130] >       ],
	I0916 10:58:04.085891  175223 command_runner.go:130] >       "size": "1363676",
	I0916 10:58:04.085897  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.085903  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.085917  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.085921  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.085924  175223 command_runner.go:130] >     },
	I0916 10:58:04.085927  175223 command_runner.go:130] >     {
	I0916 10:58:04.085933  175223 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:58:04.085937  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.085942  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:58:04.085945  175223 command_runner.go:130] >       ],
	I0916 10:58:04.085949  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.085957  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:58:04.085972  175223 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:58:04.085978  175223 command_runner.go:130] >       ],
	I0916 10:58:04.085982  175223 command_runner.go:130] >       "size": "31470524",
	I0916 10:58:04.085988  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.085992  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.085999  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.086003  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.086006  175223 command_runner.go:130] >     },
	I0916 10:58:04.086010  175223 command_runner.go:130] >     {
	I0916 10:58:04.086016  175223 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:58:04.086023  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.086028  175223 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:58:04.086034  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086038  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.086047  175223 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:58:04.086060  175223 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:58:04.086066  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086070  175223 command_runner.go:130] >       "size": "63273227",
	I0916 10:58:04.086076  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.086080  175223 command_runner.go:130] >       "username": "nonroot",
	I0916 10:58:04.086087  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.086091  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.086098  175223 command_runner.go:130] >     },
	I0916 10:58:04.086102  175223 command_runner.go:130] >     {
	I0916 10:58:04.086110  175223 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:58:04.086118  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.086125  175223 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:58:04.086129  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086133  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.086143  175223 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:58:04.086152  175223 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:58:04.086157  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086162  175223 command_runner.go:130] >       "size": "149009664",
	I0916 10:58:04.086168  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.086172  175223 command_runner.go:130] >         "value": "0"
	I0916 10:58:04.086178  175223 command_runner.go:130] >       },
	I0916 10:58:04.086182  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.086188  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.086192  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.086198  175223 command_runner.go:130] >     },
	I0916 10:58:04.086201  175223 command_runner.go:130] >     {
	I0916 10:58:04.086209  175223 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:58:04.086218  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.086223  175223 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:58:04.086229  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086233  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.086242  175223 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:58:04.086251  175223 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:58:04.086255  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086261  175223 command_runner.go:130] >       "size": "95237600",
	I0916 10:58:04.086265  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.086271  175223 command_runner.go:130] >         "value": "0"
	I0916 10:58:04.086274  175223 command_runner.go:130] >       },
	I0916 10:58:04.086281  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.086285  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.086292  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.086295  175223 command_runner.go:130] >     },
	I0916 10:58:04.086301  175223 command_runner.go:130] >     {
	I0916 10:58:04.086307  175223 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:58:04.086313  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.086318  175223 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:58:04.086324  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086328  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.086338  175223 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:58:04.086345  175223 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:58:04.086351  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086355  175223 command_runner.go:130] >       "size": "89437508",
	I0916 10:58:04.086361  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.086365  175223 command_runner.go:130] >         "value": "0"
	I0916 10:58:04.086370  175223 command_runner.go:130] >       },
	I0916 10:58:04.086374  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.086382  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.086388  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.086392  175223 command_runner.go:130] >     },
	I0916 10:58:04.086397  175223 command_runner.go:130] >     {
	I0916 10:58:04.086402  175223 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:58:04.086409  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.086414  175223 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:58:04.086419  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086423  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.086438  175223 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:58:04.086447  175223 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:58:04.086453  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086457  175223 command_runner.go:130] >       "size": "92733849",
	I0916 10:58:04.086463  175223 command_runner.go:130] >       "uid": null,
	I0916 10:58:04.086467  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.086474  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.086478  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.086481  175223 command_runner.go:130] >     },
	I0916 10:58:04.086488  175223 command_runner.go:130] >     {
	I0916 10:58:04.086494  175223 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:58:04.086500  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.086505  175223 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:58:04.086511  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086516  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.086525  175223 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:58:04.086534  175223 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:58:04.086539  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086543  175223 command_runner.go:130] >       "size": "68420934",
	I0916 10:58:04.086549  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.086553  175223 command_runner.go:130] >         "value": "0"
	I0916 10:58:04.086559  175223 command_runner.go:130] >       },
	I0916 10:58:04.086562  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.086570  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.086576  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.086579  175223 command_runner.go:130] >     },
	I0916 10:58:04.086585  175223 command_runner.go:130] >     {
	I0916 10:58:04.086591  175223 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:58:04.086597  175223 command_runner.go:130] >       "repoTags": [
	I0916 10:58:04.086602  175223 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:58:04.086608  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086612  175223 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:04.086621  175223 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:58:04.086630  175223 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:58:04.086636  175223 command_runner.go:130] >       ],
	I0916 10:58:04.086640  175223 command_runner.go:130] >       "size": "742080",
	I0916 10:58:04.086647  175223 command_runner.go:130] >       "uid": {
	I0916 10:58:04.086653  175223 command_runner.go:130] >         "value": "65535"
	I0916 10:58:04.086659  175223 command_runner.go:130] >       },
	I0916 10:58:04.086664  175223 command_runner.go:130] >       "username": "",
	I0916 10:58:04.086668  175223 command_runner.go:130] >       "spec": null,
	I0916 10:58:04.086674  175223 command_runner.go:130] >       "pinned": false
	I0916 10:58:04.086677  175223 command_runner.go:130] >     }
	I0916 10:58:04.086682  175223 command_runner.go:130] >   ]
	I0916 10:58:04.086685  175223 command_runner.go:130] > }
	I0916 10:58:04.086793  175223 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:58:04.086804  175223 cache_images.go:84] Images are preloaded, skipping loading
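The image inventory above is the JSON payload returned by the CRI-O image service, which minikube's crio.go walks to decide that the preload can be skipped. A minimal sketch of decoding that payload in Go, using only the field names visible in the log (id, repoTags, repoDigests, size, username, pinned); the struct and file name are illustrative assumptions, not minikube's actual parser:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // imageList mirrors the fields shown in the dump above; fields not
    // modeled here (uid, spec) are simply ignored by json.Unmarshal.
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"` // note: a quoted string in the payload
    		Username    string   `json:"username"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	data, err := os.ReadFile("images.json") // hypothetical capture of the dump above
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(data, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		if len(img.RepoTags) > 0 {
    			fmt.Printf("%s (%s bytes)\n", img.RepoTags[0], img.Size)
    		}
    	}
    }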
	I0916 10:58:04.086810  175223 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.31.1 crio true true} ...
	I0916 10:58:04.086896  175223 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
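The kubelet drop-in shown above is rendered from the node config that follows it. A minimal sketch of producing such a [Service] override with Go's text/template, assuming the flag set visible in the ExecStart line; the template text and parameter names are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletOpts carries the values substituted into the drop-in;
    // the field names are hypothetical.
    type kubeletOpts struct {
    	BinDir, Hostname, NodeIP string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// Values taken from the log above.
    	err := t.Execute(os.Stdout, kubeletOpts{
    		BinDir:   "/var/lib/minikube/binaries/v1.31.1",
    		Hostname: "multinode-026168",
    		NodeIP:   "192.168.67.2",
    	})
    	if err != nil {
    		panic(err)
    	}
    }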
	I0916 10:58:04.086961  175223 ssh_runner.go:195] Run: crio config
	I0916 10:58:04.123201  175223 command_runner.go:130] ! time="2024-09-16 10:58:04.122736212Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0916 10:58:04.123229  175223 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 10:58:04.127639  175223 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 10:58:04.127663  175223 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 10:58:04.127669  175223 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 10:58:04.127673  175223 command_runner.go:130] > #
	I0916 10:58:04.127681  175223 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 10:58:04.127689  175223 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 10:58:04.127697  175223 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 10:58:04.127710  175223 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 10:58:04.127720  175223 command_runner.go:130] > # reload'.
	I0916 10:58:04.127743  175223 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 10:58:04.127757  175223 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 10:58:04.127767  175223 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 10:58:04.127777  175223 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 10:58:04.127783  175223 command_runner.go:130] > [crio]
	I0916 10:58:04.127789  175223 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 10:58:04.127795  175223 command_runner.go:130] > # container images, in this directory.
	I0916 10:58:04.127802  175223 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0916 10:58:04.127822  175223 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 10:58:04.127830  175223 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0916 10:58:04.127836  175223 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 10:58:04.127844  175223 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 10:58:04.127850  175223 command_runner.go:130] > # storage_driver = "vfs"
	I0916 10:58:04.127858  175223 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 10:58:04.127866  175223 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 10:58:04.127871  175223 command_runner.go:130] > # storage_option = [
	I0916 10:58:04.127879  175223 command_runner.go:130] > # ]
	I0916 10:58:04.127885  175223 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 10:58:04.127893  175223 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 10:58:04.127898  175223 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 10:58:04.127905  175223 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 10:58:04.127911  175223 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 10:58:04.127918  175223 command_runner.go:130] > # always happen on a node reboot
	I0916 10:58:04.127923  175223 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 10:58:04.127930  175223 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 10:58:04.127936  175223 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 10:58:04.127947  175223 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 10:58:04.127957  175223 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0916 10:58:04.127966  175223 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 10:58:04.127974  175223 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 10:58:04.127981  175223 command_runner.go:130] > # internal_wipe = true
	I0916 10:58:04.127986  175223 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 10:58:04.127994  175223 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 10:58:04.128001  175223 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 10:58:04.128008  175223 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 10:58:04.128014  175223 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 10:58:04.128020  175223 command_runner.go:130] > [crio.api]
	I0916 10:58:04.128026  175223 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 10:58:04.128036  175223 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 10:58:04.128041  175223 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 10:58:04.128047  175223 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 10:58:04.128053  175223 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 10:58:04.128061  175223 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 10:58:04.128065  175223 command_runner.go:130] > # stream_port = "0"
	I0916 10:58:04.128071  175223 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 10:58:04.128080  175223 command_runner.go:130] > # stream_enable_tls = false
	I0916 10:58:04.128088  175223 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 10:58:04.128095  175223 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 10:58:04.128100  175223 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 10:58:04.128108  175223 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 10:58:04.128115  175223 command_runner.go:130] > # minutes.
	I0916 10:58:04.128118  175223 command_runner.go:130] > # stream_tls_cert = ""
	I0916 10:58:04.128127  175223 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 10:58:04.128135  175223 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 10:58:04.128139  175223 command_runner.go:130] > # stream_tls_key = ""
	I0916 10:58:04.128149  175223 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 10:58:04.128157  175223 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 10:58:04.128164  175223 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 10:58:04.128168  175223 command_runner.go:130] > # stream_tls_ca = ""
	I0916 10:58:04.128177  175223 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:58:04.128184  175223 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0916 10:58:04.128191  175223 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0916 10:58:04.128198  175223 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0916 10:58:04.128212  175223 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 10:58:04.128220  175223 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 10:58:04.128225  175223 command_runner.go:130] > [crio.runtime]
	I0916 10:58:04.128234  175223 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 10:58:04.128242  175223 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 10:58:04.128249  175223 command_runner.go:130] > # "nofile=1024:2048"
	I0916 10:58:04.128255  175223 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 10:58:04.128261  175223 command_runner.go:130] > # default_ulimits = [
	I0916 10:58:04.128265  175223 command_runner.go:130] > # ]
	I0916 10:58:04.128273  175223 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 10:58:04.128279  175223 command_runner.go:130] > # no_pivot = false
	I0916 10:58:04.128285  175223 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 10:58:04.128293  175223 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 10:58:04.128300  175223 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 10:58:04.128306  175223 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 10:58:04.128313  175223 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 10:58:04.128320  175223 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:58:04.128325  175223 command_runner.go:130] > # conmon = ""
	I0916 10:58:04.128330  175223 command_runner.go:130] > # Cgroup setting for conmon
	I0916 10:58:04.128339  175223 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 10:58:04.128347  175223 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 10:58:04.128355  175223 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 10:58:04.128363  175223 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 10:58:04.128370  175223 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:58:04.128376  175223 command_runner.go:130] > # conmon_env = [
	I0916 10:58:04.128380  175223 command_runner.go:130] > # ]
	I0916 10:58:04.128387  175223 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 10:58:04.128392  175223 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 10:58:04.128400  175223 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 10:58:04.128404  175223 command_runner.go:130] > # default_env = [
	I0916 10:58:04.128409  175223 command_runner.go:130] > # ]
	I0916 10:58:04.128415  175223 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 10:58:04.128421  175223 command_runner.go:130] > # selinux = false
	I0916 10:58:04.128428  175223 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 10:58:04.128436  175223 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 10:58:04.128444  175223 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 10:58:04.128448  175223 command_runner.go:130] > # seccomp_profile = ""
	I0916 10:58:04.128456  175223 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 10:58:04.128461  175223 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 10:58:04.128469  175223 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 10:58:04.128473  175223 command_runner.go:130] > # which might increase security.
	I0916 10:58:04.128480  175223 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0916 10:58:04.128486  175223 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 10:58:04.128494  175223 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 10:58:04.128503  175223 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 10:58:04.128511  175223 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 10:58:04.128519  175223 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:58:04.128523  175223 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 10:58:04.128531  175223 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 10:58:04.128537  175223 command_runner.go:130] > # the cgroup blockio controller.
	I0916 10:58:04.128542  175223 command_runner.go:130] > # blockio_config_file = ""
	I0916 10:58:04.128548  175223 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 10:58:04.128553  175223 command_runner.go:130] > # irqbalance daemon.
	I0916 10:58:04.128558  175223 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 10:58:04.128567  175223 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 10:58:04.128594  175223 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:58:04.128598  175223 command_runner.go:130] > # rdt_config_file = ""
	I0916 10:58:04.128605  175223 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 10:58:04.128612  175223 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 10:58:04.128620  175223 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 10:58:04.128627  175223 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 10:58:04.128634  175223 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 10:58:04.128642  175223 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 10:58:04.128648  175223 command_runner.go:130] > # will be added.
	I0916 10:58:04.128652  175223 command_runner.go:130] > # default_capabilities = [
	I0916 10:58:04.128658  175223 command_runner.go:130] > # 	"CHOWN",
	I0916 10:58:04.128662  175223 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 10:58:04.128668  175223 command_runner.go:130] > # 	"FSETID",
	I0916 10:58:04.128672  175223 command_runner.go:130] > # 	"FOWNER",
	I0916 10:58:04.128676  175223 command_runner.go:130] > # 	"SETGID",
	I0916 10:58:04.128679  175223 command_runner.go:130] > # 	"SETUID",
	I0916 10:58:04.128683  175223 command_runner.go:130] > # 	"SETPCAP",
	I0916 10:58:04.128689  175223 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 10:58:04.128695  175223 command_runner.go:130] > # 	"KILL",
	I0916 10:58:04.128700  175223 command_runner.go:130] > # ]
	I0916 10:58:04.128708  175223 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 10:58:04.128716  175223 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 10:58:04.128723  175223 command_runner.go:130] > # add_inheritable_capabilities = true
	I0916 10:58:04.128729  175223 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 10:58:04.128737  175223 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:58:04.128741  175223 command_runner.go:130] > default_sysctls = [
	I0916 10:58:04.128747  175223 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 10:58:04.128751  175223 command_runner.go:130] > ]
	I0916 10:58:04.128757  175223 command_runner.go:130] > # List of devices on the host that a
	I0916 10:58:04.128763  175223 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 10:58:04.128770  175223 command_runner.go:130] > # allowed_devices = [
	I0916 10:58:04.128774  175223 command_runner.go:130] > # 	"/dev/fuse",
	I0916 10:58:04.128779  175223 command_runner.go:130] > # ]
	I0916 10:58:04.128784  175223 command_runner.go:130] > # List of additional devices, specified as
	I0916 10:58:04.128825  175223 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 10:58:04.128838  175223 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 10:58:04.128844  175223 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:58:04.128847  175223 command_runner.go:130] > # additional_devices = [
	I0916 10:58:04.128851  175223 command_runner.go:130] > # ]
	I0916 10:58:04.128856  175223 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 10:58:04.128859  175223 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 10:58:04.128865  175223 command_runner.go:130] > # 	"/etc/cdi",
	I0916 10:58:04.128869  175223 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 10:58:04.128874  175223 command_runner.go:130] > # ]
	I0916 10:58:04.128880  175223 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 10:58:04.128888  175223 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 10:58:04.128894  175223 command_runner.go:130] > # Defaults to false.
	I0916 10:58:04.128900  175223 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 10:58:04.128912  175223 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 10:58:04.128921  175223 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 10:58:04.128927  175223 command_runner.go:130] > # hooks_dir = [
	I0916 10:58:04.128931  175223 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 10:58:04.128936  175223 command_runner.go:130] > # ]
	I0916 10:58:04.128942  175223 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 10:58:04.128948  175223 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 10:58:04.128955  175223 command_runner.go:130] > # its default mounts from the following two files:
	I0916 10:58:04.128959  175223 command_runner.go:130] > #
	I0916 10:58:04.128967  175223 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 10:58:04.128973  175223 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 10:58:04.128981  175223 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 10:58:04.128984  175223 command_runner.go:130] > #
	I0916 10:58:04.128990  175223 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 10:58:04.128999  175223 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 10:58:04.129008  175223 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 10:58:04.129015  175223 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 10:58:04.129018  175223 command_runner.go:130] > #
	I0916 10:58:04.129023  175223 command_runner.go:130] > # default_mounts_file = ""
	I0916 10:58:04.129030  175223 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 10:58:04.129037  175223 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 10:58:04.129043  175223 command_runner.go:130] > # pids_limit = 0
	I0916 10:58:04.129049  175223 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 10:58:04.129056  175223 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 10:58:04.129064  175223 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 10:58:04.129074  175223 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 10:58:04.129079  175223 command_runner.go:130] > # log_size_max = -1
	I0916 10:58:04.129086  175223 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 10:58:04.129093  175223 command_runner.go:130] > # log_to_journald = false
	I0916 10:58:04.129098  175223 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 10:58:04.129105  175223 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 10:58:04.129110  175223 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 10:58:04.129117  175223 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 10:58:04.129123  175223 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 10:58:04.129129  175223 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 10:58:04.129135  175223 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 10:58:04.129141  175223 command_runner.go:130] > # read_only = false
	I0916 10:58:04.129149  175223 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 10:58:04.129157  175223 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 10:58:04.129163  175223 command_runner.go:130] > # live configuration reload.
	I0916 10:58:04.129167  175223 command_runner.go:130] > # log_level = "info"
	I0916 10:58:04.129174  175223 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 10:58:04.129179  175223 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:58:04.129185  175223 command_runner.go:130] > # log_filter = ""
	I0916 10:58:04.129191  175223 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 10:58:04.129199  175223 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 10:58:04.129206  175223 command_runner.go:130] > # separated by comma.
	I0916 10:58:04.129210  175223 command_runner.go:130] > # uid_mappings = ""
	I0916 10:58:04.129217  175223 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 10:58:04.129224  175223 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 10:58:04.129230  175223 command_runner.go:130] > # separated by comma.
	I0916 10:58:04.129234  175223 command_runner.go:130] > # gid_mappings = ""
	I0916 10:58:04.129242  175223 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 10:58:04.129248  175223 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:58:04.129256  175223 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:58:04.129262  175223 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 10:58:04.129268  175223 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 10:58:04.129276  175223 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:58:04.129283  175223 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:58:04.129289  175223 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 10:58:04.129295  175223 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 10:58:04.129302  175223 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 10:58:04.129307  175223 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 10:58:04.129313  175223 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 10:58:04.129321  175223 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 10:58:04.129330  175223 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 10:58:04.129358  175223 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 10:58:04.129365  175223 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 10:58:04.129371  175223 command_runner.go:130] > # drop_infra_ctr = true
	I0916 10:58:04.129378  175223 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 10:58:04.129387  175223 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 10:58:04.129395  175223 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 10:58:04.129401  175223 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 10:58:04.129408  175223 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 10:58:04.129415  175223 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 10:58:04.129422  175223 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 10:58:04.129431  175223 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 10:58:04.129437  175223 command_runner.go:130] > # pinns_path = ""
	I0916 10:58:04.129443  175223 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 10:58:04.129451  175223 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0916 10:58:04.129460  175223 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0916 10:58:04.129464  175223 command_runner.go:130] > # default_runtime = "runc"
	I0916 10:58:04.129470  175223 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 10:58:04.129479  175223 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0916 10:58:04.129489  175223 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 10:58:04.129496  175223 command_runner.go:130] > # creation as a file is not desired either.
	I0916 10:58:04.129504  175223 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 10:58:04.129512  175223 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 10:58:04.129516  175223 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 10:58:04.129522  175223 command_runner.go:130] > # ]
	I0916 10:58:04.129528  175223 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 10:58:04.129536  175223 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 10:58:04.129547  175223 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0916 10:58:04.129555  175223 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0916 10:58:04.129561  175223 command_runner.go:130] > #
	I0916 10:58:04.129565  175223 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0916 10:58:04.129575  175223 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0916 10:58:04.129581  175223 command_runner.go:130] > #  runtime_type = "oci"
	I0916 10:58:04.129587  175223 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0916 10:58:04.129594  175223 command_runner.go:130] > #  privileged_without_host_devices = false
	I0916 10:58:04.129598  175223 command_runner.go:130] > #  allowed_annotations = []
	I0916 10:58:04.129601  175223 command_runner.go:130] > # Where:
	I0916 10:58:04.129607  175223 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0916 10:58:04.129615  175223 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0916 10:58:04.129623  175223 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 10:58:04.129629  175223 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 10:58:04.129635  175223 command_runner.go:130] > #   in $PATH.
	I0916 10:58:04.129641  175223 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0916 10:58:04.129648  175223 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 10:58:04.129654  175223 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0916 10:58:04.129660  175223 command_runner.go:130] > #   state.
	I0916 10:58:04.129667  175223 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 10:58:04.129677  175223 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 10:58:04.129683  175223 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 10:58:04.129690  175223 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 10:58:04.129696  175223 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 10:58:04.129705  175223 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 10:58:04.129712  175223 command_runner.go:130] > #   The currently recognized values are:
	I0916 10:58:04.129718  175223 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 10:58:04.129727  175223 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 10:58:04.129734  175223 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 10:58:04.129740  175223 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 10:58:04.129749  175223 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 10:58:04.129758  175223 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 10:58:04.129766  175223 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 10:58:04.129773  175223 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0916 10:58:04.129780  175223 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 10:58:04.129784  175223 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 10:58:04.129789  175223 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0916 10:58:04.129793  175223 command_runner.go:130] > runtime_type = "oci"
	I0916 10:58:04.129799  175223 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 10:58:04.129804  175223 command_runner.go:130] > runtime_config_path = ""
	I0916 10:58:04.129810  175223 command_runner.go:130] > monitor_path = ""
	I0916 10:58:04.129814  175223 command_runner.go:130] > monitor_cgroup = ""
	I0916 10:58:04.129823  175223 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 10:58:04.129845  175223 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0916 10:58:04.129851  175223 command_runner.go:130] > # running containers
	I0916 10:58:04.129855  175223 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0916 10:58:04.129861  175223 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0916 10:58:04.129870  175223 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0916 10:58:04.129877  175223 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0916 10:58:04.129883  175223 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0916 10:58:04.129889  175223 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0916 10:58:04.129894  175223 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0916 10:58:04.129900  175223 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0916 10:58:04.129905  175223 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0916 10:58:04.129911  175223 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0916 10:58:04.129918  175223 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 10:58:04.129927  175223 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 10:58:04.129935  175223 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 10:58:04.129942  175223 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 10:58:04.129953  175223 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 10:58:04.129961  175223 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 10:58:04.129972  175223 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 10:58:04.129982  175223 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 10:58:04.129990  175223 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 10:58:04.129999  175223 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 10:58:04.130004  175223 command_runner.go:130] > # Example:
	I0916 10:58:04.130009  175223 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 10:58:04.130016  175223 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 10:58:04.130020  175223 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 10:58:04.130026  175223 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 10:58:04.130032  175223 command_runner.go:130] > # cpuset = 0
	I0916 10:58:04.130037  175223 command_runner.go:130] > # cpushares = "0-1"
	I0916 10:58:04.130044  175223 command_runner.go:130] > # Where:
	I0916 10:58:04.130048  175223 command_runner.go:130] > # The workload name is workload-type.
	I0916 10:58:04.130057  175223 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 10:58:04.130064  175223 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 10:58:04.130069  175223 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 10:58:04.130079  175223 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 10:58:04.130087  175223 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 10:58:04.130092  175223 command_runner.go:130] > # 
	I0916 10:58:04.130099  175223 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 10:58:04.130104  175223 command_runner.go:130] > #
	I0916 10:58:04.130109  175223 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 10:58:04.130115  175223 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 10:58:04.130123  175223 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 10:58:04.130138  175223 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 10:58:04.130145  175223 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 10:58:04.130149  175223 command_runner.go:130] > [crio.image]
	I0916 10:58:04.130157  175223 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 10:58:04.130162  175223 command_runner.go:130] > # default_transport = "docker://"
	I0916 10:58:04.130170  175223 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 10:58:04.130181  175223 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:58:04.130187  175223 command_runner.go:130] > # global_auth_file = ""
	I0916 10:58:04.130192  175223 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 10:58:04.130199  175223 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:58:04.130204  175223 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 10:58:04.130213  175223 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 10:58:04.130221  175223 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:58:04.130227  175223 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:58:04.130233  175223 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 10:58:04.130239  175223 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 10:58:04.130247  175223 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 10:58:04.130253  175223 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 10:58:04.130261  175223 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 10:58:04.130265  175223 command_runner.go:130] > # pause_command = "/pause"
	I0916 10:58:04.130273  175223 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 10:58:04.130279  175223 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 10:58:04.130287  175223 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 10:58:04.130296  175223 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 10:58:04.130303  175223 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 10:58:04.130307  175223 command_runner.go:130] > # signature_policy = ""
	I0916 10:58:04.130319  175223 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 10:58:04.130327  175223 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 10:58:04.130332  175223 command_runner.go:130] > # changing them here.
	I0916 10:58:04.130336  175223 command_runner.go:130] > # insecure_registries = [
	I0916 10:58:04.130341  175223 command_runner.go:130] > # ]
	I0916 10:58:04.130347  175223 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 10:58:04.130355  175223 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 10:58:04.130359  175223 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 10:58:04.130366  175223 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 10:58:04.130370  175223 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 10:58:04.130377  175223 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 10:58:04.130383  175223 command_runner.go:130] > # CNI plugins.
	I0916 10:58:04.130387  175223 command_runner.go:130] > [crio.network]
	I0916 10:58:04.130395  175223 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 10:58:04.130401  175223 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 10:58:04.130407  175223 command_runner.go:130] > # cni_default_network = ""
	I0916 10:58:04.130413  175223 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 10:58:04.130421  175223 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 10:58:04.130429  175223 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 10:58:04.130433  175223 command_runner.go:130] > # plugin_dirs = [
	I0916 10:58:04.130438  175223 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 10:58:04.130441  175223 command_runner.go:130] > # ]
	I0916 10:58:04.130446  175223 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 10:58:04.130451  175223 command_runner.go:130] > [crio.metrics]
	I0916 10:58:04.130456  175223 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 10:58:04.130462  175223 command_runner.go:130] > # enable_metrics = false
	I0916 10:58:04.130466  175223 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 10:58:04.130474  175223 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 10:58:04.130480  175223 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0916 10:58:04.130488  175223 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 10:58:04.130496  175223 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 10:58:04.130500  175223 command_runner.go:130] > # metrics_collectors = [
	I0916 10:58:04.130506  175223 command_runner.go:130] > # 	"operations",
	I0916 10:58:04.130510  175223 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 10:58:04.130517  175223 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 10:58:04.130521  175223 command_runner.go:130] > # 	"operations_errors",
	I0916 10:58:04.130527  175223 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 10:58:04.130531  175223 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 10:58:04.130537  175223 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 10:58:04.130542  175223 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 10:58:04.130548  175223 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 10:58:04.130565  175223 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 10:58:04.130578  175223 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 10:58:04.130582  175223 command_runner.go:130] > # 	"containers_oom_total",
	I0916 10:58:04.130588  175223 command_runner.go:130] > # 	"containers_oom",
	I0916 10:58:04.130592  175223 command_runner.go:130] > # 	"processes_defunct",
	I0916 10:58:04.130598  175223 command_runner.go:130] > # 	"operations_total",
	I0916 10:58:04.130603  175223 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 10:58:04.130607  175223 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 10:58:04.130614  175223 command_runner.go:130] > # 	"operations_errors_total",
	I0916 10:58:04.130618  175223 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 10:58:04.130625  175223 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 10:58:04.130630  175223 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 10:58:04.130636  175223 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 10:58:04.130641  175223 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 10:58:04.130647  175223 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 10:58:04.130651  175223 command_runner.go:130] > # ]
	I0916 10:58:04.130658  175223 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 10:58:04.130662  175223 command_runner.go:130] > # metrics_port = 9090
	I0916 10:58:04.130669  175223 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 10:58:04.130676  175223 command_runner.go:130] > # metrics_socket = ""
	I0916 10:58:04.130683  175223 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 10:58:04.130689  175223 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 10:58:04.130700  175223 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 10:58:04.130707  175223 command_runner.go:130] > # certificate on any modification event.
	I0916 10:58:04.130711  175223 command_runner.go:130] > # metrics_cert = ""
	I0916 10:58:04.130718  175223 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 10:58:04.130723  175223 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 10:58:04.130729  175223 command_runner.go:130] > # metrics_key = ""
	I0916 10:58:04.130735  175223 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 10:58:04.130740  175223 command_runner.go:130] > [crio.tracing]
	I0916 10:58:04.130746  175223 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 10:58:04.130752  175223 command_runner.go:130] > # enable_tracing = false
	I0916 10:58:04.130758  175223 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 10:58:04.130764  175223 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 10:58:04.130769  175223 command_runner.go:130] > # Number of samples to collect per million spans.
	I0916 10:58:04.130775  175223 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 10:58:04.130781  175223 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 10:58:04.130787  175223 command_runner.go:130] > [crio.stats]
	I0916 10:58:04.130792  175223 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 10:58:04.130799  175223 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 10:58:04.130803  175223 command_runner.go:130] > # stats_collection_period = 0
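The `crio config` dump above is TOML. A minimal sketch of pulling out the few keys minikube acts on here (cgroup_manager, conmon_cgroup, pause_image) with the github.com/BurntSushi/toml package, assuming the dump has been saved to a local file; everything not modeled in the struct is ignored by the decoder:

    package main

    import (
    	"fmt"

    	"github.com/BurntSushi/toml"
    )

    // crioConf models only the tables and keys inspected below; the
    // key names come straight from the dump above.
    type crioConf struct {
    	Crio struct {
    		Runtime struct {
    			ConmonCgroup  string `toml:"conmon_cgroup"`
    			CgroupManager string `toml:"cgroup_manager"`
    		} `toml:"runtime"`
    		Image struct {
    			PauseImage string `toml:"pause_image"`
    		} `toml:"image"`
    	} `toml:"crio"`
    }

    func main() {
    	var c crioConf
    	if _, err := toml.DecodeFile("crio.conf", &c); err != nil { // hypothetical local copy
    		panic(err)
    	}
    	fmt.Println("cgroup manager:", c.Crio.Runtime.CgroupManager) // "cgroupfs" in the dump
    	fmt.Println("pause image:", c.Crio.Image.PauseImage)         // "registry.k8s.io/pause:3.10"
    }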
	I0916 10:58:04.130868  175223 cni.go:84] Creating CNI manager for ""
	I0916 10:58:04.130877  175223 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0916 10:58:04.130886  175223 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:58:04.130904  175223 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-026168 NodeName:multinode-026168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:58:04.131047  175223 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-026168"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:58:04.131110  175223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:58:04.138737  175223 command_runner.go:130] > kubeadm
	I0916 10:58:04.138763  175223 command_runner.go:130] > kubectl
	I0916 10:58:04.138770  175223 command_runner.go:130] > kubelet
	I0916 10:58:04.139429  175223 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:58:04.139495  175223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:58:04.147378  175223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0916 10:58:04.163489  175223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:58:04.180483  175223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
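The 2154-byte file copied here is the kubeadm config rendered above, staged as /var/tmp/minikube/kubeadm.yaml.new. As a hedged aside, a config like this can be sanity-checked in place before kubeadm consumes it, assuming a kubeadm new enough to ship the "config validate" subcommand (v1.26+); minikube does not run this step itself:

    # Sketch: validate the staged kubeadm config on the node.
    # Assumes kubeadm >= v1.26, where `kubeadm config validate` exists.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new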
	I0916 10:58:04.197027  175223 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:58:04.200305  175223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
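The /etc/hosts edit above is the usual idempotent pattern: filter out any stale control-plane.minikube.internal entry, append the current one, and copy the temp file back into place. A generic sketch of the same pattern (HOST and IP are placeholders; it assumes the entry is tab-separated, as in the command above):

    # Sketch: idempotently pin HOST to IP in /etc/hosts.
    HOST=control-plane.minikube.internal
    IP=192.168.67.2
    # Drop any existing tab-separated line for HOST, then append the fresh mapping.
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts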
	I0916 10:58:04.210827  175223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:04.281935  175223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:58:04.294374  175223 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.2
	I0916 10:58:04.294393  175223 certs.go:194] generating shared ca certs ...
	I0916 10:58:04.294410  175223 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:04.294561  175223 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:58:04.294601  175223 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:58:04.294615  175223 certs.go:256] generating profile certs ...
	I0916 10:58:04.294690  175223 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key
	I0916 10:58:04.294774  175223 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key.d8814b66
	I0916 10:58:04.294820  175223 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key
	I0916 10:58:04.294831  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:58:04.294842  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:58:04.294855  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:58:04.294870  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:58:04.294883  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:58:04.294895  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:58:04.294911  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:58:04.294925  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:58:04.294972  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:58:04.294999  175223 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:58:04.295008  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:58:04.295031  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:58:04.295053  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:58:04.295073  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:58:04.295109  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:58:04.295157  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:04.295174  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:58:04.295187  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:58:04.295704  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:58:04.319339  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:58:04.343126  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:58:04.404430  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:58:04.427776  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:58:04.449714  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:58:04.471765  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:58:04.494807  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:58:04.518184  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:58:04.540485  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:58:04.563118  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:58:04.585610  175223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:58:04.601974  175223 ssh_runner.go:195] Run: openssl version
	I0916 10:58:04.607056  175223 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:58:04.607147  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:58:04.616321  175223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:58:04.620005  175223 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:58:04.620044  175223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:58:04.620081  175223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:58:04.626591  175223 command_runner.go:130] > 3ec20f2e
	I0916 10:58:04.626676  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:58:04.635108  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:58:04.644228  175223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:04.647850  175223 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:04.647888  175223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:04.647932  175223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:04.654165  175223 command_runner.go:130] > b5213941
	I0916 10:58:04.654296  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:58:04.663255  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:58:04.672223  175223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:58:04.675487  175223 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:58:04.675526  175223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:58:04.675575  175223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:58:04.682535  175223 command_runner.go:130] > 51391683
	I0916 10:58:04.682639  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
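The three blocks above all follow the same OpenSSL trust-store convention: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and link /etc/ssl/certs/<hash>.0 to it so OpenSSL can find the cert by hash (the same layout c_rehash or update-ca-certificates would produce). A minimal sketch, with the filename as a placeholder:

    # Sketch: install one CA cert the way the steps above do; CERT is a placeholder path.
    CERT=/usr/share/ca-certificates/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. 3ec20f2e
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves certs via <hash>.0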
	I0916 10:58:04.691236  175223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:58:04.694523  175223 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:58:04.694551  175223 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:58:04.694560  175223 command_runner.go:130] > Device: 801h/2049d	Inode: 1050903     Links: 1
	I0916 10:58:04.694597  175223 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:04.694610  175223 command_runner.go:130] > Access: 2024-09-16 10:56:19.327776911 +0000
	I0916 10:58:04.694618  175223 command_runner.go:130] > Modify: 2024-09-16 10:53:26.655075181 +0000
	I0916 10:58:04.694634  175223 command_runner.go:130] > Change: 2024-09-16 10:53:26.655075181 +0000
	I0916 10:58:04.694646  175223 command_runner.go:130] >  Birth: 2024-09-16 10:53:26.655075181 +0000
	I0916 10:58:04.694713  175223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:58:04.700906  175223 command_runner.go:130] > Certificate will not expire
	I0916 10:58:04.701130  175223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:58:04.707192  175223 command_runner.go:130] > Certificate will not expire
	I0916 10:58:04.707263  175223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:58:04.713512  175223 command_runner.go:130] > Certificate will not expire
	I0916 10:58:04.713604  175223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:58:04.719790  175223 command_runner.go:130] > Certificate will not expire
	I0916 10:58:04.719951  175223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:58:04.726197  175223 command_runner.go:130] > Certificate will not expire
	I0916 10:58:04.726393  175223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:58:04.732701  175223 command_runner.go:130] > Certificate will not expire
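Each expiry check above relies on openssl's -checkend flag, which exits 0 (and prints "Certificate will not expire") when the certificate stays valid for at least the given number of seconds, here 86400, i.e. 24 hours. A sketch that runs the same test across the cert directories used above:

    # Sketch: warn if any control-plane cert expires within 24h (86400s).
    for crt in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 \
        || echo "WARNING: $crt expires within 24h" >&2
    done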
	I0916 10:58:04.732771  175223 kubeadm.go:392] StartCluster: {Name:multinode-026168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:58:04.732899  175223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:58:04.732977  175223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:58:04.768473  175223 cri.go:89] found id: ""
	I0916 10:58:04.768555  175223 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:58:04.776576  175223 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0916 10:58:04.776606  175223 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0916 10:58:04.776615  175223 command_runner.go:130] > /var/lib/minikube/etcd:
	I0916 10:58:04.776619  175223 command_runner.go:130] > member
	I0916 10:58:04.777269  175223 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:58:04.777289  175223 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:58:04.777352  175223 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:58:04.785568  175223 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:58:04.786023  175223 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-026168" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:58:04.786146  175223 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-026168" cluster setting kubeconfig missing "multinode-026168" context setting]
	I0916 10:58:04.786399  175223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
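The repair above rewrites the kubeconfig so the missing multinode-026168 cluster and context entries exist again. Done by hand it would look roughly like this (a sketch using the profile cert paths named elsewhere in this log, not minikube's actual code path):

    # Sketch: recreate the missing cluster/context entries in the test kubeconfig.
    export KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
    P=/home/jenkins/minikube-integration/19651-3799/.minikube
    kubectl config set-cluster multinode-026168 \
      --server=https://192.168.67.2:8443 --certificate-authority="$P/ca.crt"
    kubectl config set-credentials multinode-026168 \
      --client-certificate="$P/profiles/multinode-026168/client.crt" \
      --client-key="$P/profiles/multinode-026168/client.key"
    kubectl config set-context multinode-026168 \
      --cluster=multinode-026168 --user=multinode-026168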
	I0916 10:58:04.786793  175223 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:58:04.787023  175223 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:58:04.787481  175223 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:58:04.787628  175223 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:58:04.796622  175223 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.67.2
	I0916 10:58:04.796659  175223 kubeadm.go:597] duration metric: took 19.364477ms to restartPrimaryControlPlane
	I0916 10:58:04.796671  175223 kubeadm.go:394] duration metric: took 63.905535ms to StartCluster
	I0916 10:58:04.796691  175223 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:04.796759  175223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:58:04.797281  175223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:04.797515  175223 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:58:04.797596  175223 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:58:04.797733  175223 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:58:04.800931  175223 out.go:177] * Verifying Kubernetes components...
	I0916 10:58:04.800938  175223 out.go:177] * Enabled addons: 
	I0916 10:58:04.802352  175223 addons.go:510] duration metric: took 4.757406ms for enable addons: enabled=[]
	I0916 10:58:04.802406  175223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:04.913755  175223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:58:04.996327  175223 node_ready.go:35] waiting up to 6m0s for node "multinode-026168" to be "Ready" ...
	I0916 10:58:04.996565  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:04.996581  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:04.996598  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:04.996606  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:04.996917  175223 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:58:04.996950  175223 round_trippers.go:577] Response Headers:
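The blank response status above just means the first poll raced the apiserver coming back up after the kubelet restart; the retry below gets a 200 once it is serving again. A manual probe for the same condition (this assumes default RBAC, which exposes /readyz to anonymous clients, and -k to skip TLS verification against the self-signed CA):

    # Sketch: poll the apiserver readiness endpoint until it answers.
    until curl -fsk https://192.168.67.2:8443/readyz >/dev/null; do sleep 1; done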
	I0916 10:58:05.496628  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:05.496654  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:05.496674  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:05.496678  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.003884  175223 round_trippers.go:574] Response Status: 200 OK in 2507 milliseconds
	I0916 10:58:08.003930  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.003941  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.003949  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.003954  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:58:08.003957  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:58:08.003963  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.003968  175223 round_trippers.go:580]     Audit-Id: 94135ef4-5ead-464e-bf81-c8cdf5f6968f
	I0916 10:58:08.008155  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:08.009347  175223 node_ready.go:49] node "multinode-026168" has status "Ready":"True"
	I0916 10:58:08.009428  175223 node_ready.go:38] duration metric: took 3.01299278s for node "multinode-026168" to be "Ready" ...
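The three-second wait just logged is the Ready-condition poll seen above; the CLI equivalent is a one-liner (a sketch, not what minikube runs):

    # Sketch: wait on the node's Ready condition from the CLI.
    kubectl wait --for=condition=Ready node/multinode-026168 --timeout=6m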
	I0916 10:58:08.009461  175223 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
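The extra wait announced here cycles through one label selector per control-plane component; roughly the same check from the CLI (a sketch):

    # Sketch: wait for the same system-critical pods by label/component selector.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done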
	I0916 10:58:08.009554  175223 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:58:08.009617  175223 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:58:08.009725  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:08.009766  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.009787  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.009802  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.020700  175223 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:58:08.020729  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.020738  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.020744  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:58:08.020749  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:58:08.020754  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.020760  175223 round_trippers.go:580]     Audit-Id: 80660ac8-0483-4f09-acbf-7ea6daadcb13
	I0916 10:58:08.020763  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.095635  175223 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"896"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"798","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90668 chars]
	I0916 10:58:08.102527  175223 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.102757  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:08.102796  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.102816  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.102833  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.106339  175223 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:58:08.106366  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.106375  175223 round_trippers.go:580]     Audit-Id: 32451d5c-9e92-488b-99c3-7ee0d6074ebf
	I0916 10:58:08.106383  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.106388  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.106392  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.106396  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.106400  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.106878  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"798","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0916 10:58:08.107556  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:08.107577  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.107587  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.107593  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.110282  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:08.110339  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.110353  175223 round_trippers.go:580]     Audit-Id: c647a2f9-515b-4b28-9c96-21dacec0e612
	I0916 10:58:08.110358  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.110361  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.110365  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.110368  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.110373  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.110491  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:08.110890  175223 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:08.110913  175223 pod_ready.go:82] duration metric: took 8.289287ms for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.110926  175223 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.111001  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:58:08.111016  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.111026  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.111035  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.112776  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:08.112800  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.112810  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.112816  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.112821  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.112826  175223 round_trippers.go:580]     Audit-Id: 3d170883-2041-426f-ad92-d822d576717e
	I0916 10:58:08.112831  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.112836  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.112954  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"724","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6575 chars]
	I0916 10:58:08.113491  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:08.113512  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.113522  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.113527  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.115998  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:08.116020  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.116030  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.116035  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.116040  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.116044  175223 round_trippers.go:580]     Audit-Id: f00ae94f-2125-40e1-82e4-41375a4d87ed
	I0916 10:58:08.116048  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.116051  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.116231  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"648","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:08.116525  175223 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:08.116541  175223 pod_ready.go:82] duration metric: took 5.608476ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.116556  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.116617  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:58:08.116624  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.116631  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.116637  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.120979  175223 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:58:08.121010  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.121020  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.121025  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.121031  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.121039  175223 round_trippers.go:580]     Audit-Id: f9f4142a-a8ed-4968-a19a-28c36621296c
	I0916 10:58:08.121043  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.121048  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.121226  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"732","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 9107 chars]
	I0916 10:58:08.121861  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:08.121889  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.121900  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.121907  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.123750  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:08.123773  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.123783  175223 round_trippers.go:580]     Audit-Id: eb57b8ab-4e38-4739-9b5f-dda0866b7a33
	I0916 10:58:08.123790  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.123797  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.123817  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.123823  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.123827  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.123955  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:08.124332  175223 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:08.124353  175223 pod_ready.go:82] duration metric: took 7.789749ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.124367  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.124446  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:58:08.124457  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.124467  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.124477  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.126379  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:08.126404  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.126414  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.126419  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.126425  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.126431  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.126436  175223 round_trippers.go:580]     Audit-Id: 78b88bf6-cad3-42e9-b8cd-f3ccdfba27fd
	I0916 10:58:08.126441  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.126655  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"725","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8897 chars]
	I0916 10:58:08.127082  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:08.127095  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.127102  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.127105  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.194238  175223 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0916 10:58:08.194262  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.194269  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.194272  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.194281  175223 round_trippers.go:580]     Audit-Id: a71daf69-5316-4f67-bde2-3a5fd3664cdf
	I0916 10:58:08.194284  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.194289  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.194295  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.194418  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:08.194832  175223 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:08.194856  175223 pod_ready.go:82] duration metric: took 70.476429ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.194883  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.194962  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:58:08.194972  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.194981  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.194986  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.196935  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:08.196957  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.196965  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.196970  175223 round_trippers.go:580]     Audit-Id: 5fe02249-d430-4d76-ba70-cc7f2690c2b2
	I0916 10:58:08.196975  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.196980  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.196985  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.196992  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.197116  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"711","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:58:08.210808  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:08.210833  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.210843  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.210850  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.212653  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:08.212679  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.212689  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.212693  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.212697  175223 round_trippers.go:580]     Audit-Id: 0f3de26b-b401-4baf-9741-c0e191067615
	I0916 10:58:08.212702  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.212707  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.212712  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.212816  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:08.213298  175223 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:08.213319  175223 pod_ready.go:82] duration metric: took 18.428586ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.213349  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.410780  175223 request.go:632] Waited for 197.328148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:58:08.410854  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:58:08.410878  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.410888  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.410896  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.412771  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:08.412793  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.412802  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.412806  175223 round_trippers.go:580]     Audit-Id: f5d5ffac-e73a-4b67-a012-9d508cb39266
	I0916 10:58:08.412813  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.412819  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.412823  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.412828  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.412998  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"871","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
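
The "Waited for ...ms due to client-side throttling" messages are client-go's own rate limiting, distinct from server-side priority and fairness: every request first passes through a token-bucket limiter sized by the client's QPS and Burst settings. A runnable sketch of the mechanism using client-go's flowcontrol package (the QPS/Burst values are illustrative assumptions, not the ones this harness uses):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	// Token bucket: 5 requests/second with a burst of 10
    	// (client-go's historical defaults; illustrative only).
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
    	for i := 0; i < 15; i++ {
    		start := time.Now()
    		limiter.Accept() // blocks until the bucket has a token for this request
    		if waited := time.Since(start); waited > time.Millisecond {
    			fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
    		}
    	}
    }

Once the burst is exhausted, each further call waits roughly 1/QPS seconds, which is why the waits in this trace cluster near 200ms.
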
	I0916 10:58:08.609813  175223 request.go:632] Waited for 196.281346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:58:08.609874  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:58:08.609880  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.609887  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.609891  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.612119  175223 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:58:08.612149  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.612159  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.612163  175223 round_trippers.go:580]     Audit-Id: 8fc7c445-7a4d-4cd3-9c9e-f57869c63b17
	I0916 10:58:08.612169  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.612173  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.612177  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.612181  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.612185  175223 round_trippers.go:580]     Content-Length: 210
	I0916 10:58:08.612210  175223 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-026168-m03\" not found","reason":"NotFound","details":{"name":"multinode-026168-m03","kind":"nodes"},"code":404}
	I0916 10:58:08.612453  175223 pod_ready.go:98] node "multinode-026168-m03" hosting pod "kube-proxy-g86bs" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-026168-m03": nodes "multinode-026168-m03" not found
	I0916 10:58:08.612497  175223 pod_ready.go:82] duration metric: took 399.137339ms for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
	E0916 10:58:08.612509  175223 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-026168-m03" hosting pod "kube-proxy-g86bs" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-026168-m03": nodes "multinode-026168-m03" not found
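
Note the handling above when node multinode-026168-m03 has been deleted: the GET returns 404, and rather than failing the wait, the harness marks the pod hosted on that node as skipped. A minimal sketch of that decision, assuming a standard client-go clientset (the helper name is hypothetical, not minikube's pod_ready.go):

    package podready

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReadyOrGone reports whether the named node is Ready and, separately,
    // whether it no longer exists. A NotFound error is not fatal: pods scheduled
    // to a vanished node are skipped, as the log above shows.
    func nodeReadyOrGone(ctx context.Context, cs kubernetes.Interface, name string) (ready, gone bool, err error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		return false, true, nil
    	}
    	if err != nil {
    		return false, false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, false, nil
    		}
    	}
    	return false, false, nil
    }
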
	I0916 10:58:08.612520  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:08.810513  175223 request.go:632] Waited for 197.910812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:58:08.810647  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:58:08.810669  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:08.810680  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:08.810704  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:08.814052  175223 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:58:08.814108  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:08.814121  175223 round_trippers.go:580]     Audit-Id: 2bd28f2c-dc01-468d-a94f-a3cf71004105
	I0916 10:58:08.814128  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:08.814135  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:08.814140  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:08.814145  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:08.814175  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:08 GMT
	I0916 10:58:08.814331  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qds2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac30bd54-b932-4f52-a53c-4edbc5eefc7c","resourceVersion":"784","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:58:09.010499  175223 request.go:632] Waited for 195.425133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:58:09.010602  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:58:09.010623  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:09.010657  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:09.010674  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:09.012242  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:09.012326  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:09.012347  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:09 GMT
	I0916 10:58:09.012362  175223 round_trippers.go:580]     Audit-Id: 1aefbe38-f542-4cf3-98a2-4898471baa57
	I0916 10:58:09.012433  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:09.012454  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:09.012467  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:09.012483  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:09.013211  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"738","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6052 chars]
	I0916 10:58:09.013816  175223 pod_ready.go:93] pod "kube-proxy-qds2d" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:09.013877  175223 pod_ready.go:82] duration metric: took 401.343823ms for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:09.013902  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:09.210723  175223 request.go:632] Waited for 196.710953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:09.210802  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:09.210814  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:09.210824  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:09.210838  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:09.212767  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:09.212790  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:09.212799  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:09.212804  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:09.212807  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:09.212810  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:09.212814  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:09 GMT
	I0916 10:58:09.212827  175223 round_trippers.go:580]     Audit-Id: 45b761fa-2423-46c5-b07e-386cb9aa4422
	I0916 10:58:09.212957  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:09.409874  175223 request.go:632] Waited for 196.284845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:09.409967  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:09.409977  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:09.409984  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:09.409988  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:09.412105  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:09.412126  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:09.412136  175223 round_trippers.go:580]     Audit-Id: b951832e-91a6-4bf5-8f43-b8cbcf471c35
	I0916 10:58:09.412143  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:09.412149  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:09.412154  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:09.412158  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:09.412163  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:09 GMT
	I0916 10:58:09.412299  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:09.609735  175223 request.go:632] Waited for 95.189121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:09.609797  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:09.609803  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:09.609811  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:09.609815  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:09.612003  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:09.612025  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:09.612031  175223 round_trippers.go:580]     Audit-Id: 0b394aa8-4903-4dfc-9a21-5cb55b904e5d
	I0916 10:58:09.612036  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:09.612044  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:09.612051  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:09.612056  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:09.612063  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:09 GMT
	I0916 10:58:09.612211  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:09.809889  175223 request.go:632] Waited for 197.279994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:09.809980  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:09.809990  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:09.809997  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:09.810003  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:09.812260  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:09.812284  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:09.812293  175223 round_trippers.go:580]     Audit-Id: 3e6ad290-f650-4376-926e-a330f320f887
	I0916 10:58:09.812298  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:09.812303  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:09.812307  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:09.812313  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:09.812324  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:09 GMT
	I0916 10:58:09.812511  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:10.014662  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:10.014691  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:10.014701  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:10.014704  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:10.016873  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:10.016900  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:10.016911  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:10.016917  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:10.016922  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:10.016928  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:10.016934  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:10 GMT
	I0916 10:58:10.016938  175223 round_trippers.go:580]     Audit-Id: 69f32962-7e58-48d9-85d3-2638bf028651
	I0916 10:58:10.017046  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:10.210769  175223 request.go:632] Waited for 193.236712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:10.210827  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:10.210833  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:10.210840  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:10.210843  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:10.212709  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:10.212726  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:10.212734  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:10.212739  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:10.212744  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:10.212750  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:10.212757  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:10 GMT
	I0916 10:58:10.212761  175223 round_trippers.go:580]     Audit-Id: 68fffae1-27f9-4438-bc2e-dd96f58a4b74
	I0916 10:58:10.212885  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:10.514248  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:10.514271  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:10.514279  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:10.514283  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:10.516500  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:10.516522  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:10.516532  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:10 GMT
	I0916 10:58:10.516538  175223 round_trippers.go:580]     Audit-Id: 13eaed19-8975-47b1-8576-01a6279c5a73
	I0916 10:58:10.516544  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:10.516550  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:10.516559  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:10.516563  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:10.516697  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:10.610291  175223 request.go:632] Waited for 93.213855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:10.610365  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:10.610376  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:10.610387  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:10.610397  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:10.612349  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:10.612375  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:10.612382  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:10.612390  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:10.612394  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:10.612398  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:10 GMT
	I0916 10:58:10.612401  175223 round_trippers.go:580]     Audit-Id: c12f7ecc-dced-4457-bacc-a0e81539a3b4
	I0916 10:58:10.612405  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:10.612540  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:11.014799  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:11.014824  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:11.014831  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:11.014836  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:11.017252  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:11.017272  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:11.017279  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:11.017282  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:11.017287  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:11.017290  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:11 GMT
	I0916 10:58:11.017293  175223 round_trippers.go:580]     Audit-Id: 8bf79770-70a5-4aad-ac9b-4124d754df05
	I0916 10:58:11.017296  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:11.017435  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:11.017827  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:11.017841  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:11.017848  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:11.017854  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:11.019677  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:11.019698  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:11.019707  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:11.019714  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:11 GMT
	I0916 10:58:11.019720  175223 round_trippers.go:580]     Audit-Id: 47f53971-1b59-4758-a813-61d40849654f
	I0916 10:58:11.019724  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:11.019729  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:11.019733  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:11.019943  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:11.020238  175223 pod_ready.go:103] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"False"
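
From this point the trace settles into a steady cadence: GET the kube-scheduler pod, GET its node, sleep roughly 500ms, repeat, until the pod's Ready condition flips to True or the 6m0s budget expires. A minimal sketch of such a loop using apimachinery's wait helpers (a hypothetical helper under those assumptions, not the harness's actual code):

    package podready

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod's PodReady condition about every 500ms until it
    // is True or the timeout elapses, matching the cadence visible in this log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
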
	I0916 10:58:11.514536  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:11.514557  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:11.514565  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:11.514569  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:11.516779  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:11.516805  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:11.516815  175223 round_trippers.go:580]     Audit-Id: 616b84ff-3f53-4e20-a36f-397b131c2ce0
	I0916 10:58:11.516821  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:11.516826  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:11.516829  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:11.516833  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:11.516837  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:11 GMT
	I0916 10:58:11.517396  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:11.518044  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:11.518065  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:11.518075  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:11.518081  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:11.520580  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:11.520606  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:11.520616  175223 round_trippers.go:580]     Audit-Id: 1c066d4d-af41-4ef9-9916-55d547c3f677
	I0916 10:58:11.520622  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:11.520626  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:11.520629  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:11.520636  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:11.520643  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:11 GMT
	I0916 10:58:11.520763  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:12.014373  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:12.014399  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:12.014409  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:12.014414  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:12.016803  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:12.016827  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:12.016837  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:12.016844  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:12 GMT
	I0916 10:58:12.016848  175223 round_trippers.go:580]     Audit-Id: f34e75f3-1ef7-4698-84d3-77bc249fbb0c
	I0916 10:58:12.016852  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:12.016856  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:12.016860  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:12.017000  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:12.017466  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:12.017483  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:12.017493  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:12.017498  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:12.019664  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:12.019687  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:12.019696  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:12.019702  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:12.019706  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:12.019711  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:12 GMT
	I0916 10:58:12.019715  175223 round_trippers.go:580]     Audit-Id: 3a33dc10-cd52-4496-8742-ac42c11a00a5
	I0916 10:58:12.019719  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:12.019862  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:12.514450  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:12.514472  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:12.514480  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:12.514484  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:12.516567  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:12.516588  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:12.516597  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:12.516603  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:12 GMT
	I0916 10:58:12.516608  175223 round_trippers.go:580]     Audit-Id: 9fa97c1e-7ed3-4364-92e9-b40ede8250b3
	I0916 10:58:12.516613  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:12.516617  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:12.516621  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:12.516782  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:12.517156  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:12.517171  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:12.517180  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:12.517186  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:12.519022  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:12.519039  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:12.519048  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:12 GMT
	I0916 10:58:12.519052  175223 round_trippers.go:580]     Audit-Id: 50fd03de-4c24-4c91-b58d-7961863365ac
	I0916 10:58:12.519063  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:12.519071  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:12.519076  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:12.519082  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:12.519219  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:13.015088  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:13.015113  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:13.015125  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:13.015132  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:13.017303  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:13.017356  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:13.017367  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:13.017375  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:13.017380  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:13.017385  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:13.017391  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:13 GMT
	I0916 10:58:13.017394  175223 round_trippers.go:580]     Audit-Id: 9fbb5280-9c30-4481-803a-2c30534caa13
	I0916 10:58:13.017498  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:13.018000  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:13.018017  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:13.018024  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:13.018030  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:13.019905  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:13.019936  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:13.019945  175223 round_trippers.go:580]     Audit-Id: 90d1e284-6f41-4212-9dff-dca3e8a06336
	I0916 10:58:13.019950  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:13.019954  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:13.019959  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:13.019963  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:13.019970  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:13 GMT
	I0916 10:58:13.020071  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:13.020370  175223 pod_ready.go:103] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"False"
	I0916 10:58:13.514742  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:13.514765  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:13.514774  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:13.514778  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:13.516982  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:13.517001  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:13.517009  175223 round_trippers.go:580]     Audit-Id: 85b614f1-b7d7-40c5-9d13-0cb077927ac9
	I0916 10:58:13.517014  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:13.517018  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:13.517022  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:13.517026  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:13.517029  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:13 GMT
	I0916 10:58:13.517116  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:13.517498  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:13.517512  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:13.517518  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:13.517521  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:13.519197  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:13.519219  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:13.519227  175223 round_trippers.go:580]     Audit-Id: 93f89352-1b1b-4ff9-8b6f-8d59ef92b762
	I0916 10:58:13.519231  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:13.519233  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:13.519236  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:13.519239  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:13.519249  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:13 GMT
	I0916 10:58:13.519420  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	[... the poll cycle above repeats every ~500 ms from 10:58:14.015 through 10:58:19.520: each iteration issues GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168 followed by GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168, both returning 200 OK in 1-2 milliseconds with unchanged response bodies (pod resourceVersion 941, node resourceVersion 904), and pod_ready.go:103 logs pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"False" at 10:58:15.020, 10:58:17.520, and 10:58:19.520 ...]
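The loop condensed above is minikube's pod-readiness wait: it re-fetches the static kube-scheduler pod on a fixed interval and checks its Ready condition, taking another tick while the condition is still False. Below is a minimal client-go sketch of that pattern, not minikube's actual pod_ready.go; waitPodReady and isPodReady are illustrative names, and a kubeconfig at the default path is assumed.

// Minimal sketch of a pod-readiness poll loop (assumed names, standard
// client-go calls): GET the pod every 500 ms until PodReady is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady issues one GET per tick, as in the log above, until the
// pod reports Ready or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			if !isPodReady(pod) {
				fmt.Printf("pod %q in %q namespace has status Ready: False\n", name, ns)
				return false, nil
			}
			return true, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-multinode-026168"); err != nil {
		panic(err)
	}
}

Under these assumptions the sketch matches the 500 ms request cadence in the log; the interleaved node GETs above come from a separate check of the node object and are omitted here for brevity.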
	I0916 10:58:20.014606  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:20.014634  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:20.014654  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:20.014658  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:20.016960  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:20.016985  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:20.016995  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:20.017000  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:20.017006  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:20.017010  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:20 GMT
	I0916 10:58:20.017015  175223 round_trippers.go:580]     Audit-Id: bcdd608e-fea8-4740-8270-c2f988b3e1b2
	I0916 10:58:20.017020  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:20.017175  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:20.017716  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:20.017736  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:20.017743  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:20.017750  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:20.019656  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:20.019672  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:20.019679  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:20 GMT
	I0916 10:58:20.019682  175223 round_trippers.go:580]     Audit-Id: 29423c7f-c7b7-4ff4-ae62-6256d680108a
	I0916 10:58:20.019684  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:20.019687  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:20.019690  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:20.019694  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:20.019866  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:20.514533  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:20.514559  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:20.514567  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:20.514570  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:20.516838  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:20.516858  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:20.516867  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:20 GMT
	I0916 10:58:20.516873  175223 round_trippers.go:580]     Audit-Id: a09b0180-fc7e-4236-bd28-cb71f1c289f9
	I0916 10:58:20.516878  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:20.516883  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:20.516887  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:20.516891  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:20.516996  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:20.517397  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:20.517411  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:20.517419  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:20.517421  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:20.519422  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:20.519445  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:20.519454  175223 round_trippers.go:580]     Audit-Id: 113015da-1a5c-4464-8100-a3f5d00b6d09
	I0916 10:58:20.519458  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:20.519464  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:20.519469  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:20.519476  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:20.519483  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:20 GMT
	I0916 10:58:20.519649  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:21.014230  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:21.014256  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:21.014265  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:21.014270  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:21.016668  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:21.016691  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:21.016698  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:21.016702  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:21.016705  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:21.016708  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:21.016711  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:21 GMT
	I0916 10:58:21.016714  175223 round_trippers.go:580]     Audit-Id: e27c647a-033d-4516-82c2-f49f47fb7e4e
	I0916 10:58:21.016886  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:21.017272  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:21.017286  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:21.017292  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:21.017296  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:21.019211  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:21.019228  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:21.019233  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:21.019238  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:21.019242  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:21.019246  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:21 GMT
	I0916 10:58:21.019249  175223 round_trippers.go:580]     Audit-Id: 493d456d-7a19-47aa-bd6d-ea6b5866fb27
	I0916 10:58:21.019252  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:21.019390  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:21.515075  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:21.515101  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:21.515111  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:21.515117  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:21.517523  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:21.517551  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:21.517561  175223 round_trippers.go:580]     Audit-Id: b668559a-9f6d-4089-98a7-16c5eae82098
	I0916 10:58:21.517566  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:21.517571  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:21.517574  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:21.517579  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:21.517583  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:21 GMT
	I0916 10:58:21.517747  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:21.518184  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:21.518197  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:21.518204  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:21.518208  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:21.519923  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:21.519942  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:21.519949  175223 round_trippers.go:580]     Audit-Id: a495c7d2-07c3-48b3-90df-1faf823f80a9
	I0916 10:58:21.519952  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:21.519957  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:21.519960  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:21.519964  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:21.519970  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:21 GMT
	I0916 10:58:21.520127  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:22.014605  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:22.014628  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:22.014639  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:22.014643  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:22.016902  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:22.016930  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:22.016937  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:22.016941  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:22.016945  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:22.016948  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:22.016951  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:22 GMT
	I0916 10:58:22.016954  175223 round_trippers.go:580]     Audit-Id: 530a9c59-a5b0-41b2-8b77-8aa7f3947d3f
	I0916 10:58:22.017049  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:22.017445  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:22.017459  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:22.017466  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:22.017472  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:22.019222  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:22.019244  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:22.019251  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:22.019255  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:22 GMT
	I0916 10:58:22.019258  175223 round_trippers.go:580]     Audit-Id: fd42cb9d-39de-497a-aab3-d7de279f21a8
	I0916 10:58:22.019261  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:22.019264  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:22.019267  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:22.019399  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:22.019729  175223 pod_ready.go:103] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"False"
	I0916 10:58:22.515115  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:22.515141  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:22.515149  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:22.515153  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:22.517480  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:22.517507  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:22.517517  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:22.517524  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:22.517528  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:22.517534  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:22 GMT
	I0916 10:58:22.517540  175223 round_trippers.go:580]     Audit-Id: 52ec29f6-9a6c-48cd-9f47-bd43f8c69a0e
	I0916 10:58:22.517546  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:22.517655  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:22.518050  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:22.518065  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:22.518072  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:22.518075  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:22.520008  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:22.520032  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:22.520042  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:22.520048  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:22.520052  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:22.520056  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:22 GMT
	I0916 10:58:22.520061  175223 round_trippers.go:580]     Audit-Id: b04d1428-f014-4ab1-8817-59878faefd0e
	I0916 10:58:22.520065  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:22.520178  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:23.014980  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:23.015012  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:23.015019  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:23.015023  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:23.017162  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:23.017181  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:23.017187  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:23 GMT
	I0916 10:58:23.017192  175223 round_trippers.go:580]     Audit-Id: 3b162513-7bf1-4e6e-a1ce-f648caa93ab5
	I0916 10:58:23.017195  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:23.017198  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:23.017203  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:23.017207  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:23.017300  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:23.017733  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:23.017752  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:23.017759  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:23.017763  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:23.019517  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:23.019534  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:23.019540  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:23.019544  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:23.019547  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:23 GMT
	I0916 10:58:23.019550  175223 round_trippers.go:580]     Audit-Id: 6001eb9d-0fd9-452b-bc1f-4517e3386070
	I0916 10:58:23.019553  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:23.019555  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:23.019751  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:23.514302  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:23.514325  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:23.514333  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:23.514338  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:23.516550  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:23.516571  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:23.516576  175223 round_trippers.go:580]     Audit-Id: 63aeb5ce-d776-49ab-ae80-d06e9041c106
	I0916 10:58:23.516581  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:23.516584  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:23.516587  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:23.516592  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:23.516597  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:23 GMT
	I0916 10:58:23.516898  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"941","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5345 chars]
	I0916 10:58:23.517294  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:23.517306  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:23.517313  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:23.517317  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:23.519072  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:23.519092  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:23.519100  175223 round_trippers.go:580]     Audit-Id: 23449c27-cc24-4a0f-87cd-07e0324a11d4
	I0916 10:58:23.519105  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:23.519110  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:23.519114  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:23.519118  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:23.519122  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:23 GMT
	I0916 10:58:23.519239  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:24.014602  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:24.014626  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:24.014635  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:24.014640  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:24.016831  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:24.016859  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:24.016868  175223 round_trippers.go:580]     Audit-Id: 443aa5c3-7583-4331-ad2d-2a36b0cde400
	I0916 10:58:24.016873  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:24.016877  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:24.016882  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:24.016887  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:24.016891  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:24 GMT
	I0916 10:58:24.017022  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"988","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5101 chars]
	I0916 10:58:24.017493  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:24.017510  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:24.017518  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:24.017521  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:24.019341  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:24.019361  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:24.019370  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:24.019377  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:24 GMT
	I0916 10:58:24.019389  175223 round_trippers.go:580]     Audit-Id: e771be22-234b-4687-8be1-bae2fe649222
	I0916 10:58:24.019400  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:24.019405  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:24.019413  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:24.019535  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:24.019874  175223 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:24.019892  175223 pod_ready.go:82] duration metric: took 15.005950433s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:24.019902  175223 pod_ready.go:39] duration metric: took 16.010398996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
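[Editor's note] The long request/response run above is a fixed-cadence readiness poll: GET the pod, inspect its "Ready" condition, sleep, repeat. A minimal client-go sketch of that pattern follows — illustrative only, not minikube's actual pod_ready.go; the kubeconfig path and pod name are placeholders taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		// One GET per iteration, mirroring each request/response pair above.
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
			"kube-scheduler-multinode-026168", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println(`pod has status "Ready":"True"`)
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the timestamps above
	}
}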
	I0916 10:58:24.019921  175223 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:58:24.019970  175223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:58:24.029980  175223 command_runner.go:130] > 1017
	I0916 10:58:24.030861  175223 api_server.go:72] duration metric: took 19.233314858s to wait for apiserver process to appear ...
	I0916 10:58:24.030886  175223 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:58:24.030922  175223 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:58:24.034430  175223 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 10:58:24.034495  175223 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0916 10:58:24.034505  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:24.034513  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:24.034517  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:24.035362  175223 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:58:24.035383  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:24.035390  175223 round_trippers.go:580]     Content-Length: 263
	I0916 10:58:24.035392  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:24 GMT
	I0916 10:58:24.035396  175223 round_trippers.go:580]     Audit-Id: ade7467f-ed92-418b-bd7f-9bb2a3678f71
	I0916 10:58:24.035400  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:24.035403  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:24.035407  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:24.035410  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:24.035430  175223 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:58:24.035504  175223 api_server.go:141] control plane version: v1.31.1
	I0916 10:58:24.035519  175223 api_server.go:131] duration metric: took 4.62728ms to wait for apiserver health ...
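[Editor's note] The healthz and /version probes just above reduce to two calls on the same authenticated client. A sketch, assuming cs is the *kubernetes.Clientset built as in the earlier snippet; checkAPIServer is a hypothetical helper name, not minikube's api_server.go.

func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	// GET /healthz through the discovery REST client; the body is "ok" when healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		return err
	}
	fmt.Printf("healthz returned: %s\n", body)

	// GET /version, decoded into the major/minor/gitVersion fields shown above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s (%s, %s)\n", v.GitVersion, v.GoVersion, v.Platform)
	return nil
}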
	I0916 10:58:24.035529  175223 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:58:24.035603  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:24.035610  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:24.035616  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:24.035619  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:24.038377  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:24.038440  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:24.038455  175223 round_trippers.go:580]     Audit-Id: 18f46f89-7ac2-435b-b97c-573f40f803f4
	I0916 10:58:24.038466  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:24.038472  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:24.038477  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:24.038481  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:24.038485  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:24 GMT
	I0916 10:58:24.039167  175223 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"988"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90955 chars]
	I0916 10:58:24.043010  175223 system_pods.go:59] 12 kube-system pods found
	I0916 10:58:24.043040  175223 system_pods.go:61] "coredns-7c65d6cfc9-s82cx" [85130138-c50d-47a8-8bbe-de91bb9a0472] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:58:24.043046  175223 system_pods.go:61] "etcd-multinode-026168" [7221a4cc-7e2d-41a3-b83b-579646af2de2] Running
	I0916 10:58:24.043056  175223 system_pods.go:61] "kindnet-2jtzj" [530fad1f-573c-4186-b57e-287f820fc065] Running
	I0916 10:58:24.043060  175223 system_pods.go:61] "kindnet-mckv5" [33f14b42-6960-4bd0-b467-60342a55aff6] Running
	I0916 10:58:24.043065  175223 system_pods.go:61] "kindnet-zv2p5" [9e993dc5-3e51-407a-96f0-81c74274fb7c] Running
	I0916 10:58:24.043069  175223 system_pods.go:61] "kube-apiserver-multinode-026168" [e0a10f33-efc2-4f2d-b46c-bdb68cf664ce] Running
	I0916 10:58:24.043077  175223 system_pods.go:61] "kube-controller-manager-multinode-026168" [c0b53919-27a0-4a54-ba15-a530a06dbf0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:58:24.043083  175223 system_pods.go:61] "kube-proxy-6p6vt" [42162ba1-cb61-4a95-acc5-5c4c5f3ead8c] Running
	I0916 10:58:24.043088  175223 system_pods.go:61] "kube-proxy-g86bs" [efc5e34d-fd17-408e-ad74-cd36ded784b3] Running
	I0916 10:58:24.043093  175223 system_pods.go:61] "kube-proxy-qds2d" [ac30bd54-b932-4f52-a53c-4edbc5eefc7c] Running
	I0916 10:58:24.043096  175223 system_pods.go:61] "kube-scheduler-multinode-026168" [b293178b-0aac-457b-b950-71fdd2c8fa80] Running
	I0916 10:58:24.043100  175223 system_pods.go:61] "storage-provisioner" [ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7] Running
	I0916 10:58:24.043105  175223 system_pods.go:74] duration metric: took 7.568602ms to wait for pod list to return data ...
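[Editor's note] The system_pods pass is a single List over kube-system followed by a per-pod readiness summary, which is how lines like "Running / Ready:ContainersNotReady" above are produced. A rough sketch under the same clientset assumption; listSystemPods is a made-up name and the output format only approximates the log's.

func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		state := string(p.Status.Phase) // e.g. "Running"
		for _, c := range p.Status.ContainerStatuses {
			if !c.Ready {
				state += " / Ready:ContainersNotReady" // unready container names omitted here
				break
			}
		}
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, state)
	}
	return nil
}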
	I0916 10:58:24.043115  175223 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:58:24.043172  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:58:24.043179  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:24.043186  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:24.043190  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:24.045277  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:24.045295  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:24.045304  175223 round_trippers.go:580]     Audit-Id: f2be0a14-d849-447f-9efa-c3aa20c9e8fd
	I0916 10:58:24.045312  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:24.045318  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:24.045323  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:24.045346  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:24.045352  175223 round_trippers.go:580]     Content-Length: 261
	I0916 10:58:24.045361  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:24 GMT
	I0916 10:58:24.045378  175223 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"988"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3f54840f-e917-4b73-aac8-060ce8f211be","resourceVersion":"325","creationTimestamp":"2024-09-16T10:53:39Z"}}]}
	I0916 10:58:24.045523  175223 default_sa.go:45] found service account: "default"
	I0916 10:58:24.045535  175223 default_sa.go:55] duration metric: took 2.416068ms for default service account to be created ...
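[Editor's note] The default_sa wait only needs to distinguish "not created yet" from a real error, which apierrors.IsNotFound does. A sketch under the same assumptions; waitDefaultSA is hypothetical, and it needs apierrors "k8s.io/apimachinery/pkg/api/errors" added to the imports.

func waitDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println(`found service account: "default"`)
			return nil
		}
		if !apierrors.IsNotFound(err) {
			return err // anything other than "not yet created" is fatal
		}
		time.Sleep(time.Second)
	}
}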
	I0916 10:58:24.045542  175223 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:58:24.045587  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:24.045594  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:24.045601  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:24.045609  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:24.047751  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:24.047771  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:24.047780  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:24.047787  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:24.047793  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:24.047798  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:24.047801  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:24 GMT
	I0916 10:58:24.047804  175223 round_trippers.go:580]     Audit-Id: b71a3aca-dbce-4e16-9cd3-9ed5dcece9a0
	I0916 10:58:24.048404  175223 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"988"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90955 chars]
	I0916 10:58:24.051093  175223 system_pods.go:86] 12 kube-system pods found
	I0916 10:58:24.051119  175223 system_pods.go:89] "coredns-7c65d6cfc9-s82cx" [85130138-c50d-47a8-8bbe-de91bb9a0472] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:58:24.051126  175223 system_pods.go:89] "etcd-multinode-026168" [7221a4cc-7e2d-41a3-b83b-579646af2de2] Running
	I0916 10:58:24.051132  175223 system_pods.go:89] "kindnet-2jtzj" [530fad1f-573c-4186-b57e-287f820fc065] Running
	I0916 10:58:24.051136  175223 system_pods.go:89] "kindnet-mckv5" [33f14b42-6960-4bd0-b467-60342a55aff6] Running
	I0916 10:58:24.051139  175223 system_pods.go:89] "kindnet-zv2p5" [9e993dc5-3e51-407a-96f0-81c74274fb7c] Running
	I0916 10:58:24.051144  175223 system_pods.go:89] "kube-apiserver-multinode-026168" [e0a10f33-efc2-4f2d-b46c-bdb68cf664ce] Running
	I0916 10:58:24.051152  175223 system_pods.go:89] "kube-controller-manager-multinode-026168" [c0b53919-27a0-4a54-ba15-a530a06dbf0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:58:24.051162  175223 system_pods.go:89] "kube-proxy-6p6vt" [42162ba1-cb61-4a95-acc5-5c4c5f3ead8c] Running
	I0916 10:58:24.051170  175223 system_pods.go:89] "kube-proxy-g86bs" [efc5e34d-fd17-408e-ad74-cd36ded784b3] Running
	I0916 10:58:24.051174  175223 system_pods.go:89] "kube-proxy-qds2d" [ac30bd54-b932-4f52-a53c-4edbc5eefc7c] Running
	I0916 10:58:24.051180  175223 system_pods.go:89] "kube-scheduler-multinode-026168" [b293178b-0aac-457b-b950-71fdd2c8fa80] Running
	I0916 10:58:24.051183  175223 system_pods.go:89] "storage-provisioner" [ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7] Running
	I0916 10:58:24.051191  175223 system_pods.go:126] duration metric: took 5.644255ms to wait for k8s-apps to be running ...
	I0916 10:58:24.051199  175223 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:58:24.051238  175223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:58:24.062063  175223 system_svc.go:56] duration metric: took 10.854324ms WaitForService to wait for kubelet
	I0916 10:58:24.062094  175223 kubeadm.go:582] duration metric: took 19.264551629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
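[Editor's note] The kubelet probe above relies on systemctl's exit status: the "is-active --quiet" form exits 0 exactly when the unit is active. minikube issues the command over its ssh_runner inside the node; the local equivalent with "os/exec" is just the following (kubeletRunning is a hypothetical stand-in).

func kubeletRunning() bool {
	// Run() returns nil only when systemctl exits 0, i.e. the unit is active.
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}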
	I0916 10:58:24.062110  175223 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:58:24.062178  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:58:24.062186  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:24.062193  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:24.062197  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:24.065007  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:24.065031  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:24.065041  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:24 GMT
	I0916 10:58:24.065048  175223 round_trippers.go:580]     Audit-Id: ee88eece-b90d-4a88-8dd1-3582de3a18ff
	I0916 10:58:24.065052  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:24.065056  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:24.065060  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:24.065065  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:24.065286  175223 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"988"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 13361 chars]
	I0916 10:58:24.065866  175223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:58:24.065890  175223 node_conditions.go:123] node cpu capacity is 8
	I0916 10:58:24.065903  175223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:58:24.065906  175223 node_conditions.go:123] node cpu capacity is 8
	I0916 10:58:24.065912  175223 node_conditions.go:105] duration metric: took 3.798539ms to run NodePressure ...
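The NodePressure check above decodes the /api/v1/nodes response and reads each node's capacity map, which is where the two cpu/ephemeral-storage pairs come from. A minimal sketch of that decode step, with structs trimmed to just the keys the log inspects (these are illustrative, not minikube's actual types):

// capacity.go - decode node capacity from a Kubernetes NodeList response.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			// capacity values arrive as strings, e.g. "8" and "304681132Ki"
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	raw := []byte(`{"items":[{"metadata":{"name":"multinode-026168"},"status":{"capacity":{"cpu":"8","ephemeral-storage":"304681132Ki"}}}]}`)
	var nl nodeList
	if err := json.Unmarshal(raw, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		fmt.Printf("%s: cpu=%s ephemeral=%s\n", n.Metadata.Name,
			n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
}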
	I0916 10:58:24.065922  175223 start.go:241] waiting for startup goroutines ...
	I0916 10:58:24.065932  175223 start.go:246] waiting for cluster config update ...
	I0916 10:58:24.065939  175223 start.go:255] writing updated cluster config ...
	I0916 10:58:24.068667  175223 out.go:201] 
	I0916 10:58:24.070719  175223 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:58:24.070810  175223 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:58:24.072661  175223 out.go:177] * Starting "multinode-026168-m02" worker node in "multinode-026168" cluster
	I0916 10:58:24.074072  175223 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:58:24.075460  175223 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:58:24.076653  175223 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:58:24.076680  175223 cache.go:56] Caching tarball of preloaded images
	I0916 10:58:24.076697  175223 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:58:24.076775  175223 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:58:24.076786  175223 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:58:24.076880  175223 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	W0916 10:58:24.096460  175223 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:58:24.096477  175223 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:58:24.096566  175223 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:58:24.096582  175223 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:58:24.096586  175223 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:58:24.096596  175223 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:58:24.096602  175223 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:58:24.097815  175223 image.go:273] response: 
	I0916 10:58:24.155365  175223 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:58:24.155400  175223 cache.go:194] Successfully downloaded all kic artifacts
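The cache steps above try three sources in order: the image already in the local docker daemon (rejected in this run for the wrong architecture), the on-disk cached tarball (which hit), and a registry pull as the last resort. A condensed, purely illustrative decision function capturing that order (names are invented, not minikube's API):

// kicsource.go - order of preference for the kic base image, as logged above.
package main

import "fmt"

type imageSource int

const (
	fromDaemon        imageSource = iota // image usable straight from the docker daemon
	fromCachedTarball                    // load from the on-disk tarball cache
	needsPull                            // nothing local; pull from the registry
)

func pickSource(inDaemon bool, daemonArch, wantArch string, tarballCached bool) imageSource {
	if inDaemon && daemonArch == wantArch {
		return fromDaemon
	}
	if tarballCached {
		return fromCachedTarball
	}
	return needsPull
}

func main() {
	// This run: image present but wrong architecture ("arm64" is a stand-in), tarball cached.
	fmt.Println(pickSource(true, "arm64", "amd64", true) == fromCachedTarball) // true
}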
	I0916 10:58:24.155431  175223 start.go:360] acquireMachinesLock for multinode-026168-m02: {Name:mk244ea9c32e56587b67dd9c9f2d4f0dcccd26e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:58:24.155497  175223 start.go:364] duration metric: took 48.931µs to acquireMachinesLock for "multinode-026168-m02"
	I0916 10:58:24.155516  175223 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:58:24.155522  175223 fix.go:54] fixHost starting: m02
	I0916 10:58:24.155734  175223 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:58:24.172530  175223 fix.go:112] recreateIfNeeded on multinode-026168-m02: state=Stopped err=<nil>
	W0916 10:58:24.172563  175223 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:58:24.175025  175223 out.go:177] * Restarting existing docker container for "multinode-026168-m02" ...
	I0916 10:58:24.176494  175223 cli_runner.go:164] Run: docker start multinode-026168-m02
	I0916 10:58:24.460946  175223 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:58:24.479972  175223 kic.go:430] container "multinode-026168-m02" state is running.
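The probe above is `docker container inspect --format={{.State.Status}}`; the same call sketched with os/exec (assumes a docker CLI on PATH; the container name is taken from this run):

// state.go - read a container's state the way the cli_runner lines above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("multinode-026168-m02")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", state) // "running" right after the docker start above
}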
	I0916 10:58:24.480445  175223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:58:24.498818  175223 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/config.json ...
	I0916 10:58:24.499061  175223 machine.go:93] provisionDockerMachine start ...
	I0916 10:58:24.499138  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:24.518298  175223 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:24.518491  175223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0916 10:58:24.518503  175223 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:58:24.519186  175223 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52118->127.0.0.1:32943: read: connection reset by peer
	I0916 10:58:27.653028  175223 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m02
	
	I0916 10:58:27.653059  175223 ubuntu.go:169] provisioning hostname "multinode-026168-m02"
	I0916 10:58:27.653123  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:27.671065  175223 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:27.671256  175223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0916 10:58:27.671270  175223 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-026168-m02 && echo "multinode-026168-m02" | sudo tee /etc/hostname
	I0916 10:58:27.812759  175223 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-026168-m02
	
	I0916 10:58:27.812850  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:27.830777  175223 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:27.830961  175223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0916 10:58:27.830978  175223 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-026168-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-026168-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-026168-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:58:27.961427  175223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
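The SSH script above pins the node's hostname in /etc/hosts: rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new one. A rough Go equivalent of just the string edit (reading/writing the file and sudo are omitted):

// hosts.go - ensure /etc/hosts maps 127.0.1.1 to the machine's hostname.
package main

import (
	"fmt"
	"strings"
)

func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace the stale mapping
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // no entry yet: append one
}

func main() {
	fmt.Println(ensureHostname("127.0.0.1 localhost", "multinode-026168-m02"))
}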
	I0916 10:58:27.961462  175223 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 10:58:27.961482  175223 ubuntu.go:177] setting up certificates
	I0916 10:58:27.961492  175223 provision.go:84] configureAuth start
	I0916 10:58:27.961555  175223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:58:27.978692  175223 provision.go:143] copyHostCerts
	I0916 10:58:27.978731  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:58:27.978773  175223 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 10:58:27.978783  175223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 10:58:27.978858  175223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 10:58:27.978941  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:58:27.978959  175223 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 10:58:27.978964  175223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 10:58:27.978992  175223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 10:58:27.979039  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:58:27.979057  175223 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 10:58:27.979063  175223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 10:58:27.979084  175223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 10:58:27.979148  175223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.multinode-026168-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-026168-m02]
	I0916 10:58:28.215208  175223 provision.go:177] copyRemoteCerts
	I0916 10:58:28.215266  175223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:58:28.215300  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:28.233566  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:58:28.330173  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:58:28.330229  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:58:28.352994  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:58:28.353133  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:58:28.374935  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:58:28.375002  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:58:28.397757  175223 provision.go:87] duration metric: took 436.253538ms to configureAuth
	I0916 10:58:28.397786  175223 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:58:28.398006  175223 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:58:28.398097  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:28.415607  175223 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:28.415810  175223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0916 10:58:28.415835  175223 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:58:28.667883  175223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:58:28.667916  175223 machine.go:96] duration metric: took 4.168839766s to provisionDockerMachine
	I0916 10:58:28.667931  175223 start.go:293] postStartSetup for "multinode-026168-m02" (driver="docker")
	I0916 10:58:28.667945  175223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:58:28.668025  175223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:58:28.668082  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:28.685948  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:58:28.782121  175223 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:58:28.785355  175223 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:58:28.785381  175223 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:58:28.785390  175223 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:58:28.785398  175223 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:58:28.785406  175223 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:58:28.785409  175223 command_runner.go:130] > ID=ubuntu
	I0916 10:58:28.785413  175223 command_runner.go:130] > ID_LIKE=debian
	I0916 10:58:28.785418  175223 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:58:28.785422  175223 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:58:28.785441  175223 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:58:28.785450  175223 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:58:28.785456  175223 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:58:28.785501  175223 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:58:28.785522  175223 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:58:28.785529  175223 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:58:28.785537  175223 info.go:137] Remote host: Ubuntu 22.04.4 LTS
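/etc/os-release, echoed line by line above, is plain KEY=value text; the "Couldn't set key" warnings fire when a key has no matching field in the struct being filled. A sketch that sidesteps that by collecting every pair into a map instead:

// osrelease.go - parse /etc/os-release key=value pairs into a map.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(s string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue // skip blank or malformed lines
		}
		kv[k] = strings.Trim(v, `"`) // values may or may not be quoted
	}
	return kv
}

func main() {
	sample := "PRETTY_NAME=\"Ubuntu 22.04.4 LTS\"\nID=ubuntu\nVERSION_CODENAME=jammy"
	fmt.Println(parseOSRelease(sample)["PRETTY_NAME"]) // Ubuntu 22.04.4 LTS
}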
	I0916 10:58:28.785547  175223 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 10:58:28.785630  175223 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 10:58:28.785703  175223 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 10:58:28.785712  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /etc/ssl/certs/112082.pem
	I0916 10:58:28.785797  175223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:58:28.794008  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:58:28.816977  175223 start.go:296] duration metric: took 149.029837ms for postStartSetup
	I0916 10:58:28.817058  175223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:58:28.817098  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:28.835120  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:58:28.926228  175223 command_runner.go:130] > 30%
	I0916 10:58:28.926365  175223 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:58:28.930477  175223 command_runner.go:130] > 204G
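The two disk probes above pipe df through awk to pull one field each (used percentage, then free space in GiB). The same checks sketched via os/exec, with the flags and field numbers mirroring the logged commands:

// disk.go - replicate the df/awk probes from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func dfField(path, flags, field string) (string, error) {
	cmd := fmt.Sprintf("df %s %s | awk 'NR==2{print $%s}'", flags, path, field)
	out, err := exec.Command("sh", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	used, _ := dfField("/var", "-h", "5")  // e.g. "30%" as logged
	free, _ := dfField("/var", "-BG", "4") // e.g. "204G" as logged
	fmt.Println(used, free)
}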
	I0916 10:58:28.930606  175223 fix.go:56] duration metric: took 4.775079847s for fixHost
	I0916 10:58:28.930638  175223 start.go:83] releasing machines lock for "multinode-026168-m02", held for 4.775130269s
	I0916 10:58:28.930717  175223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:58:28.950051  175223 out.go:177] * Found network options:
	I0916 10:58:28.951809  175223 out.go:177]   - NO_PROXY=192.168.67.2
	W0916 10:58:28.953408  175223 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:58:28.953448  175223 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:58:28.953522  175223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:58:28.953563  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:28.953606  175223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:58:28.953679  175223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:58:28.971750  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:58:28.972869  175223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:58:29.197522  175223 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:58:29.197619  175223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:58:29.201941  175223 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf.mk_disabled
	I0916 10:58:29.201967  175223 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:58:29.201981  175223 command_runner.go:130] > Device: c7h/199d	Inode: 535096      Links: 1
	I0916 10:58:29.201993  175223 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:29.202005  175223 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:58:29.202016  175223 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:58:29.202028  175223 command_runner.go:130] > Change: 2024-09-16 10:54:33.479990793 +0000
	I0916 10:58:29.202039  175223 command_runner.go:130] >  Birth: 2024-09-16 10:54:33.479990793 +0000
	I0916 10:58:29.202104  175223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:58:29.210252  175223 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:58:29.210332  175223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:58:29.218646  175223 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
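Both find/mv passes above disable CNI configs by renaming them with a .mk_disabled suffix so the runtime stops loading them; the bridge pass found nothing left to move. A standard-library sketch of that rename pass (the glob pattern follows the first logged command):

// cni.go - rename matching CNI configs so cri-o ignores them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableCNIConfs(dir, pattern string) ([]string, error) {
	matches, err := filepath.Glob(filepath.Join(dir, pattern))
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, m := range matches {
		if strings.HasSuffix(m, ".mk_disabled") {
			continue // already disabled on a previous run
		}
		if err := os.Rename(m, m+".mk_disabled"); err != nil {
			return moved, err
		}
		moved = append(moved, m)
	}
	return moved, nil
}

func main() {
	moved, err := disableCNIConfs("/etc/cni/net.d", "*loopback.conf*")
	fmt.Println(moved, err)
}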
	I0916 10:58:29.218681  175223 start.go:495] detecting cgroup driver to use...
	I0916 10:58:29.218712  175223 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:58:29.218757  175223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:58:29.230061  175223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:58:29.240594  175223 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:58:29.240652  175223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:58:29.252505  175223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:58:29.263581  175223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:58:29.345709  175223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:58:29.420092  175223 docker.go:233] disabling docker service ...
	I0916 10:58:29.420187  175223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:58:29.432584  175223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:58:29.443950  175223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:58:29.519936  175223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:58:29.595331  175223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:58:29.605689  175223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:58:29.619881  175223 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:58:29.620899  175223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:58:29.620958  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:29.631115  175223 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:58:29.631174  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:29.640621  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:29.650208  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:29.659682  175223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:58:29.668732  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:29.678153  175223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:58:29.687057  175223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
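The sed runs above rewrite whole lines of /etc/crio/crio.conf.d/02-crio.conf to set the pause image and cgroup manager. The same line-level substitution sketched in Go (reading the file and writing it back are left out):

// crioconf.go - swap a `key = value` line in a crio config blob.
package main

import (
	"fmt"
	"regexp"
)

func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(string(conf))
}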
	I0916 10:58:29.696861  175223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:58:29.704239  175223 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:58:29.704917  175223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:58:29.713137  175223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:29.787861  175223 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:58:29.891891  175223 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:58:29.891958  175223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:58:29.895412  175223 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:58:29.895432  175223 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:58:29.895439  175223 command_runner.go:130] > Device: d0h/208d	Inode: 190         Links: 1
	I0916 10:58:29.895449  175223 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:29.895457  175223 command_runner.go:130] > Access: 2024-09-16 10:58:29.877379694 +0000
	I0916 10:58:29.895468  175223 command_runner.go:130] > Modify: 2024-09-16 10:58:29.877379694 +0000
	I0916 10:58:29.895476  175223 command_runner.go:130] > Change: 2024-09-16 10:58:29.877379694 +0000
	I0916 10:58:29.895483  175223 command_runner.go:130] >  Birth: -
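"Will wait 60s for socket path" above is a poll for the crio socket to reappear after the restart; the stat output that follows confirms it did. A minimal version of that wait using os.Stat and a deadline:

// waitsock.go - poll until a unix socket exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // the socket is there, as in the stat output above
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}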
	I0916 10:58:29.895535  175223 start.go:563] Will wait 60s for crictl version
	I0916 10:58:29.895582  175223 ssh_runner.go:195] Run: which crictl
	I0916 10:58:29.898876  175223 command_runner.go:130] > /usr/bin/crictl
	I0916 10:58:29.898956  175223 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:58:29.933197  175223 command_runner.go:130] > Version:  0.1.0
	I0916 10:58:29.933222  175223 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:58:29.933226  175223 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0916 10:58:29.933232  175223 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:58:29.933249  175223 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 10:58:29.933303  175223 ssh_runner.go:195] Run: crio --version
	I0916 10:58:29.967862  175223 command_runner.go:130] > crio version 1.24.6
	I0916 10:58:29.967893  175223 command_runner.go:130] > Version:          1.24.6
	I0916 10:58:29.967905  175223 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:58:29.967913  175223 command_runner.go:130] > GitTreeState:     clean
	I0916 10:58:29.967921  175223 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:58:29.967928  175223 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:58:29.967932  175223 command_runner.go:130] > Compiler:         gc
	I0916 10:58:29.967937  175223 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:58:29.967942  175223 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:58:29.967951  175223 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:58:29.967958  175223 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:58:29.967965  175223 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:58:29.968029  175223 ssh_runner.go:195] Run: crio --version
	I0916 10:58:30.001898  175223 command_runner.go:130] > crio version 1.24.6
	I0916 10:58:30.001920  175223 command_runner.go:130] > Version:          1.24.6
	I0916 10:58:30.001931  175223 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0916 10:58:30.001937  175223 command_runner.go:130] > GitTreeState:     clean
	I0916 10:58:30.001944  175223 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0916 10:58:30.001952  175223 command_runner.go:130] > GoVersion:        go1.18.2
	I0916 10:58:30.001957  175223 command_runner.go:130] > Compiler:         gc
	I0916 10:58:30.001964  175223 command_runner.go:130] > Platform:         linux/amd64
	I0916 10:58:30.001972  175223 command_runner.go:130] > Linkmode:         dynamic
	I0916 10:58:30.001988  175223 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0916 10:58:30.001998  175223 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:58:30.002008  175223 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:58:30.004475  175223 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 10:58:30.006601  175223 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:58:30.008385  175223 cli_runner.go:164] Run: docker network inspect multinode-026168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:58:30.025671  175223 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:58:30.029487  175223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:58:30.039875  175223 mustload.go:65] Loading cluster: multinode-026168
	I0916 10:58:30.040129  175223 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:58:30.040389  175223 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:58:30.057560  175223 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:58:30.057822  175223 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168 for IP: 192.168.67.3
	I0916 10:58:30.057835  175223 certs.go:194] generating shared ca certs ...
	I0916 10:58:30.057855  175223 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:30.057993  175223 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 10:58:30.058046  175223 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 10:58:30.058064  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:58:30.058085  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:58:30.058104  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:58:30.058128  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:58:30.058194  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 10:58:30.058236  175223 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 10:58:30.058250  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:58:30.058286  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:58:30.058317  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:58:30.058349  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 10:58:30.058404  175223 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 10:58:30.058442  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:30.058462  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem -> /usr/share/ca-certificates/11208.pem
	I0916 10:58:30.058481  175223 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> /usr/share/ca-certificates/112082.pem
	I0916 10:58:30.058510  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:58:30.081548  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:58:30.104775  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:58:30.128095  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:58:30.151296  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:58:30.173512  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 10:58:30.197862  175223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 10:58:30.220606  175223 ssh_runner.go:195] Run: openssl version
	I0916 10:58:30.225834  175223 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:58:30.225906  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:58:30.234676  175223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:30.238064  175223 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:30.238096  175223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:30.238144  175223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:30.244346  175223 command_runner.go:130] > b5213941
	I0916 10:58:30.244625  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:58:30.252720  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 10:58:30.261215  175223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 10:58:30.264425  175223 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:58:30.264462  175223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 10:58:30.264517  175223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 10:58:30.270927  175223 command_runner.go:130] > 51391683
	I0916 10:58:30.271000  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 10:58:30.279290  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 10:58:30.288124  175223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 10:58:30.291387  175223 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:58:30.291412  175223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 10:58:30.291444  175223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 10:58:30.297832  175223 command_runner.go:130] > 3ec20f2e
	I0916 10:58:30.297914  175223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
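Each CA above is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0). A sketch of those two steps, shelling out to openssl for the hash exactly as the log does:

// cahash.go - link a CA cert into /etc/ssl/certs by its subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := certsDir + "/" + hash + ".0"
	_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}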
	I0916 10:58:30.306534  175223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:58:30.309898  175223 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:58:30.309935  175223 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:58:30.309998  175223 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.31.1 crio false true} ...
	I0916 10:58:30.310113  175223 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=multinode-026168-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-026168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
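The kubelet drop-in above is what gets scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (370 bytes). A sketch assembling the same unit text for a worker node, with the flags copied verbatim from this run's log:

// dropin.go - build the kubelet systemd drop-in shown above.
package main

import "fmt"

func kubeletDropIn(version, hostname, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, hostname, nodeIP)
}

func main() {
	fmt.Print(kubeletDropIn("v1.31.1", "multinode-026168-m02", "192.168.67.3"))
}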
	I0916 10:58:30.310172  175223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:58:30.318749  175223 command_runner.go:130] > kubeadm
	I0916 10:58:30.318768  175223 command_runner.go:130] > kubectl
	I0916 10:58:30.318772  175223 command_runner.go:130] > kubelet
	I0916 10:58:30.318794  175223 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:58:30.318843  175223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:58:30.326673  175223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I0916 10:58:30.344189  175223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:58:30.361769  175223 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:58:30.365064  175223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:58:30.375324  175223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:30.452716  175223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:58:30.463986  175223 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 10:58:30.464264  175223 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:58:30.466559  175223 out.go:177] * Verifying Kubernetes components...
	I0916 10:58:30.468341  175223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:30.549610  175223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:58:30.561010  175223 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:58:30.561230  175223 kapi.go:59] client config for multinode-026168: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/multinode-026168/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:58:30.561520  175223 node_ready.go:35] waiting up to 6m0s for node "multinode-026168-m02" to be "Ready" ...
	I0916 10:58:30.561605  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:58:30.561614  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:30.561621  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:30.561627  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:30.563935  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:30.563957  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:30.563964  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:30.563967  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:30.563970  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:30 GMT
	I0916 10:58:30.563973  175223 round_trippers.go:580]     Audit-Id: b8cffe78-8692-4e75-940f-2dbf76a78d5e
	I0916 10:58:30.563975  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:30.563978  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:30.564119  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"738","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 6052 chars]
	I0916 10:58:30.564418  175223 node_ready.go:49] node "multinode-026168-m02" has status "Ready":"True"
	I0916 10:58:30.564432  175223 node_ready.go:38] duration metric: took 2.89329ms for node "multinode-026168-m02" to be "Ready" ...
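The readiness wait above issues GET /api/v1/nodes/<name> and looks for the Ready condition with status "True" in the response. A sketch of just that condition test on a decoded body (the TLS client-cert transport from the kapi.go line is omitted):

// ready.go - check the Ready condition in a Node API response.
package main

import (
	"encoding/json"
	"fmt"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func isReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil // no Ready condition reported yet
}

func main() {
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, _ := isReady(body)
	fmt.Println(ok) // true, matching node_ready.go:49 above
}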
	I0916 10:58:30.564442  175223 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:58:30.564499  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:30.564507  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:30.564513  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:30.564517  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:30.567421  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:30.567437  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:30.567443  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:30.567447  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:30 GMT
	I0916 10:58:30.567450  175223 round_trippers.go:580]     Audit-Id: 45fcc4cb-09a1-4927-8b16-9f3f2ca494e4
	I0916 10:58:30.567453  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:30.567457  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:30.567460  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:30.568137  175223 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"994"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90693 chars]
	I0916 10:58:30.570756  175223 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:30.570857  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:30.570869  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:30.570880  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:30.570888  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:30.573018  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:30.573036  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:30.573042  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:30 GMT
	I0916 10:58:30.573046  175223 round_trippers.go:580]     Audit-Id: fb3d1b15-4fd2-48b5-81fe-721ae4878712
	I0916 10:58:30.573049  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:30.573053  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:30.573059  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:30.573065  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:30.573184  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:30.573827  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:30.573845  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:30.573856  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:30.573861  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:30.575679  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:30.575695  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:30.575701  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:30.575705  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:30 GMT
	I0916 10:58:30.575708  175223 round_trippers.go:580]     Audit-Id: cc8fba80-d355-4fba-9c4c-387898fd685c
	I0916 10:58:30.575711  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:30.575714  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:30.575718  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:30.575849  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:31.071569  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:31.071600  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:31.071611  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.071618  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.074006  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:31.074030  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:31.074038  175223 round_trippers.go:580]     Audit-Id: 49028134-b8d3-4579-810e-75865bdd140e
	I0916 10:58:31.074043  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.074048  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.074054  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:31.074058  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:31.074063  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.074203  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:31.074701  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:31.074718  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:31.074728  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.074733  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.076484  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:31.076505  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:31.076514  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.076522  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.076526  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:31.076530  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:31.076536  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.076540  175223 round_trippers.go:580]     Audit-Id: 706e6a91-5e27-4c95-aa6f-805757942f16
	I0916 10:58:31.076679  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:31.571320  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:31.571344  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:31.571352  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.571355  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.573884  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:31.573909  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:31.573918  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.573923  175223 round_trippers.go:580]     Audit-Id: d1377ee1-3eef-4678-b618-bb4117c11a30
	I0916 10:58:31.573927  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.573932  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.573937  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:31.573941  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:31.574117  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:31.574714  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:31.574732  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:31.574743  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.574748  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.576620  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:31.576635  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:31.576641  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:31.576645  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:31.576647  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.576650  175223 round_trippers.go:580]     Audit-Id: bbe12aea-612a-41cf-a2dc-ddc8f0841b33
	I0916 10:58:31.576653  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.576655  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.576822  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:32.071541  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:32.071565  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:32.071575  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:32.071581  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:32.073910  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:32.073933  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:32.073939  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:32.073943  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:32.073946  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:32.073952  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:32 GMT
	I0916 10:58:32.073954  175223 round_trippers.go:580]     Audit-Id: 16a845b4-6e2b-456e-9387-4c93795f9ffd
	I0916 10:58:32.073957  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:32.074140  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:32.074753  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:32.074771  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:32.074781  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:32.074787  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:32.076572  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:32.076584  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:32.076590  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:32 GMT
	I0916 10:58:32.076595  175223 round_trippers.go:580]     Audit-Id: b5290699-d4d0-43cd-990b-d95f0a9418f7
	I0916 10:58:32.076597  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:32.076600  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:32.076603  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:32.076605  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:32.076762  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:32.571345  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:32.571371  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:32.571382  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:32.571384  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:32.573927  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:32.573950  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:32.573960  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:32.573966  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:32.573970  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:32.573974  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:32 GMT
	I0916 10:58:32.573980  175223 round_trippers.go:580]     Audit-Id: 579a9517-63d9-4d7a-9003-8d36938b264f
	I0916 10:58:32.573984  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:32.574122  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:32.574726  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:32.574745  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:32.574753  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:32.574764  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:32.576944  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:32.576973  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:32.576985  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:32 GMT
	I0916 10:58:32.576991  175223 round_trippers.go:580]     Audit-Id: d708444f-e458-42e7-abf0-c1d8a9f5266b
	I0916 10:58:32.576998  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:32.577003  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:32.577009  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:32.577013  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:32.577131  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:32.577538  175223 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
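
(Editor's note: the pod_ready.go:103 entry above closes one iteration of minikube's readiness poll, visible in this log as a Pod GET plus a Node GET roughly every 500ms until the coredns Pod reports Ready. Below is a minimal client-go sketch of such a poll, for illustration only; the kubeconfig path, namespace, pod name, and 500ms cadence are assumptions taken from this log, not minikube's actual pod_ready.go implementation.)

    // Illustrative readiness poll (a sketch; not minikube's pod_ready.go).
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the Pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig location; minikube builds its client internally.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-s82cx", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            fmt.Println(`pod has status "Ready":"False"`)
            // Matches the ~500ms cadence between GETs visible in this log.
            time.Sleep(500 * time.Millisecond)
        }
    }
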
	I0916 10:58:33.071685  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:33.071709  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:33.071717  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:33.071722  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:33.074496  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:33.074522  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:33.074530  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:33.074535  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:33.074540  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:33.074544  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:33.074553  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:33 GMT
	I0916 10:58:33.074557  175223 round_trippers.go:580]     Audit-Id: 53404c20-6ce2-44c4-8bfc-52b346ecceb5
	I0916 10:58:33.074728  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:33.075243  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:33.075260  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:33.075270  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:33.075275  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:33.076962  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:33.076984  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:33.076996  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:33.077001  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:33 GMT
	I0916 10:58:33.077005  175223 round_trippers.go:580]     Audit-Id: f6842f31-b243-4bcf-b22c-aa552485bd17
	I0916 10:58:33.077008  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:33.077012  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:33.077016  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:33.077107  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:33.571870  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:33.571896  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:33.571904  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:33.571908  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:33.574383  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:33.574409  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:33.574418  175223 round_trippers.go:580]     Audit-Id: be8aac65-c42b-497d-84f6-d9870bbdf83c
	I0916 10:58:33.574424  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:33.574428  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:33.574432  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:33.574436  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:33.574441  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:33 GMT
	I0916 10:58:33.574621  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:33.575239  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:33.575256  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:33.575267  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:33.575273  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:33.577028  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:33.577051  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:33.577060  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:33.577065  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:33.577070  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:33.577074  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:33.577079  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:33 GMT
	I0916 10:58:33.577084  175223 round_trippers.go:580]     Audit-Id: b6538f54-4683-4d2b-8858-67f5b237f0ca
	I0916 10:58:33.577257  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:34.071913  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:34.071937  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:34.071944  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:34.071948  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:34.074134  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:34.074156  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:34.074165  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:34.074169  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:34.074173  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:34.074179  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:34 GMT
	I0916 10:58:34.074182  175223 round_trippers.go:580]     Audit-Id: 77d47377-04dc-4943-9f88-11e6ae7c19e8
	I0916 10:58:34.074186  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:34.074411  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:34.074870  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:34.074883  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:34.074890  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:34.074893  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:34.076608  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:34.076628  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:34.076638  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:34 GMT
	I0916 10:58:34.076643  175223 round_trippers.go:580]     Audit-Id: bb9f514b-461d-4b49-a70c-8636794e2af0
	I0916 10:58:34.076650  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:34.076654  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:34.076658  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:34.076672  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:34.076841  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:34.571632  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:34.571662  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:34.571674  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:34.571681  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:34.574084  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:34.574106  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:34.574111  175223 round_trippers.go:580]     Audit-Id: bbe2ce10-bf7e-4872-8fd6-b321f07ea944
	I0916 10:58:34.574116  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:34.574119  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:34.574122  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:34.574124  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:34.574128  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:34 GMT
	I0916 10:58:34.574306  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:34.574769  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:34.574781  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:34.574789  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:34.574792  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:34.576685  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:34.576701  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:34.576707  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:34.576711  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:34 GMT
	I0916 10:58:34.576715  175223 round_trippers.go:580]     Audit-Id: a88fa036-bcfa-4765-8992-b3e06032de3f
	I0916 10:58:34.576719  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:34.576722  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:34.576725  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:34.576871  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:35.071803  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:35.071827  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:35.071835  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:35.071840  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:35.074167  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:35.074191  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:35.074200  175223 round_trippers.go:580]     Audit-Id: 5e821bce-9117-4404-8d6d-ab1dc25dbb0d
	I0916 10:58:35.074205  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:35.074210  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:35.074214  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:35.074217  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:35.074221  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:35 GMT
	I0916 10:58:35.074446  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:35.074989  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:35.075005  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:35.075014  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:35.075018  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:35.076977  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:35.076997  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:35.077006  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:35.077011  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:35.077013  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:35 GMT
	I0916 10:58:35.077019  175223 round_trippers.go:580]     Audit-Id: e3b77bc8-b71d-4afd-8163-1c2fd40ebe85
	I0916 10:58:35.077023  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:35.077028  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:35.077145  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:35.077542  175223 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
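
(Editor's note: each iteration above pairs the Pod GET with a GET of /api/v1/nodes/multinode-026168, so the poller can also confirm that the node hosting the pod is itself Ready while it waits. A companion check, slotting into the sketch shown earlier and assuming the same corev1 "k8s.io/api/core/v1" import, might look like the following; this is an assumption for illustration, not minikube's code.)

    // isNodeReady reports whether the Node's Ready condition is True.
    func isNodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // The node would be fetched the same way as the pod, e.g.:
    //   node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-026168", metav1.GetOptions{})
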
	I0916 10:58:35.571860  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:35.571883  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:35.571892  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:35.571896  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:35.574087  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:35.574111  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:35.574119  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:35.574124  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:35 GMT
	I0916 10:58:35.574127  175223 round_trippers.go:580]     Audit-Id: edfa8b7f-2bba-4abb-b625-dc834cf0b37a
	I0916 10:58:35.574129  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:35.574133  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:35.574136  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:35.574327  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:35.574835  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:35.574849  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:35.574856  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:35.574860  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:35.576762  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:35.576780  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:35.576787  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:35.576791  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:35.576795  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:35.576799  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:35.576803  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:35 GMT
	I0916 10:58:35.576807  175223 round_trippers.go:580]     Audit-Id: 323f42e2-52cd-4267-b061-2c8967b4a4ca
	I0916 10:58:35.576942  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:36.071661  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:36.071699  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:36.071707  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:36.071712  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:36.074023  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:36.074051  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:36.074059  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:36.074063  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:36.074067  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:36.074071  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:36 GMT
	I0916 10:58:36.074074  175223 round_trippers.go:580]     Audit-Id: ef55fef6-8785-404b-ad86-902df24c4c3f
	I0916 10:58:36.074076  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:36.074286  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:36.074751  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:36.074762  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:36.074770  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:36.074773  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:36.076548  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:36.076565  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:36.076573  175223 round_trippers.go:580]     Audit-Id: 0a69e368-3d52-4424-91fa-9eb78f8b3595
	I0916 10:58:36.076579  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:36.076582  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:36.076587  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:36.076591  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:36.076596  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:36 GMT
	I0916 10:58:36.076780  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:36.571318  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:36.571341  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:36.571349  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:36.571353  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:36.575513  175223 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:58:36.575537  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:36.575544  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:36.575547  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:36.575551  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:36.575554  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:36.575556  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:36 GMT
	I0916 10:58:36.575560  175223 round_trippers.go:580]     Audit-Id: 7b1985c4-d5f4-4b01-991a-be509836e4d5
	I0916 10:58:36.575693  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:36.576165  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:36.576180  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:36.576187  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:36.576190  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:36.578113  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:36.578137  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:36.578146  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:36 GMT
	I0916 10:58:36.578151  175223 round_trippers.go:580]     Audit-Id: 46d8a58c-8f6e-474e-92b8-b51ebf9153a5
	I0916 10:58:36.578156  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:36.578161  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:36.578164  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:36.578169  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:36.578258  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:37.071966  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:37.071990  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:37.071998  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:37.072002  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:37.074418  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:37.074438  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:37.074444  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:37.074449  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:37 GMT
	I0916 10:58:37.074452  175223 round_trippers.go:580]     Audit-Id: bd947a2e-1919-41f8-871d-1708c4034833
	I0916 10:58:37.074455  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:37.074458  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:37.074461  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:37.074597  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:37.075201  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:37.075220  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:37.075231  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:37.075238  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:37.076988  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:37.077008  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:37.077016  175223 round_trippers.go:580]     Audit-Id: c29dffde-6de0-4b26-9d87-90d6c02515bc
	I0916 10:58:37.077021  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:37.077026  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:37.077032  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:37.077036  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:37.077038  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:37 GMT
	I0916 10:58:37.077134  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:37.571844  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:37.571874  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:37.571885  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:37.571891  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:37.574289  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:37.574315  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:37.574325  175223 round_trippers.go:580]     Audit-Id: 70f36ff9-19d4-4756-94dd-9c58ec51f6ef
	I0916 10:58:37.574329  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:37.574334  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:37.574340  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:37.574347  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:37.574351  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:37 GMT
	I0916 10:58:37.574482  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:37.575063  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:37.575084  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:37.575092  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:37.575096  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:37.576934  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:37.576955  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:37.576964  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:37.576971  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:37.576977  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:37.576982  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:37.576986  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:37 GMT
	I0916 10:58:37.576989  175223 round_trippers.go:580]     Audit-Id: 8a36c65e-bffb-4f17-b1ea-0313685ad7d5
	I0916 10:58:37.577103  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:37.577444  175223 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
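The pod_ready.go:103 line above is minikube summarizing the pair of GETs it just issued: it fetches the coredns Pod, reads the Ready condition from status.conditions, and, while the pod is still unready, also fetches the Node the pod is scheduled on. A minimal sketch of that readiness test, assuming a client-go Clientset; podIsReady is an illustrative name, not minikube's actual helper:

	package podwait

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady reports whether the Pod's Ready condition is True, which is
	// the check behind the `has status "Ready":"False"` messages in this log.
	func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		// No Ready condition yet (pod very new): treat as not ready.
		return false, nil
	}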
	I0916 10:58:38.071280  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:38.071304  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:38.071312  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:38.071316  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:38.073611  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:38.073635  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:38.073644  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:38.073651  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:38.073655  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:38.073661  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:38 GMT
	I0916 10:58:38.073667  175223 round_trippers.go:580]     Audit-Id: 266523e4-0ce4-4671-9ef8-5df63ef4aff3
	I0916 10:58:38.073671  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:38.073779  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:38.074223  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:38.074235  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:38.074242  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:38.074247  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:38.075917  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:38.075939  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:38.075948  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:38.075953  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:38.075958  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:38 GMT
	I0916 10:58:38.075963  175223 round_trippers.go:580]     Audit-Id: 49ddfdd4-8477-41be-9ccc-4a1c99b71eac
	I0916 10:58:38.075970  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:38.075974  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:38.076076  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:38.571768  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:38.571791  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:38.571798  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:38.571804  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:38.574350  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:38.574375  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:38.574385  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:38.574392  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:38.574399  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:38.574404  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:38.574408  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:38 GMT
	I0916 10:58:38.574412  175223 round_trippers.go:580]     Audit-Id: 60127c85-fb21-4238-a137-033673bd0426
	I0916 10:58:38.574535  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:38.575139  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:38.575156  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:38.575166  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:38.575172  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:38.576919  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:38.576940  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:38.576947  175223 round_trippers.go:580]     Audit-Id: dfba5010-120f-4e8a-bae1-fbd8d080ff5a
	I0916 10:58:38.576950  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:38.576953  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:38.576957  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:38.576960  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:38.576963  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:38 GMT
	I0916 10:58:38.577052  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:39.071760  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:39.071784  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:39.071792  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:39.071795  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:39.074244  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:39.074272  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:39.074281  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:39.074287  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:39.074292  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:39.074296  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:39 GMT
	I0916 10:58:39.074301  175223 round_trippers.go:580]     Audit-Id: 81bf4e00-b937-49ac-a332-9031a853fba9
	I0916 10:58:39.074306  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:39.074451  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:39.074945  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:39.074959  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:39.074966  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:39.074970  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:39.076679  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:39.076696  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:39.076703  175223 round_trippers.go:580]     Audit-Id: 8cdd7e1c-cd85-40b7-b9da-ca85679c0918
	I0916 10:58:39.076707  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:39.076710  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:39.076714  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:39.076717  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:39.076720  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:39 GMT
	I0916 10:58:39.076949  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:39.571681  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:39.571705  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:39.571713  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:39.571716  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:39.573940  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:39.573969  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:39.573978  175223 round_trippers.go:580]     Audit-Id: e7b25956-63df-4c54-88cd-6bb6657b34b6
	I0916 10:58:39.573984  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:39.573989  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:39.574049  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:39.574057  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:39.574067  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:39 GMT
	I0916 10:58:39.574202  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:39.574819  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:39.574838  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:39.574846  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:39.574850  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:39.579422  175223 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:58:39.579444  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:39.579453  175223 round_trippers.go:580]     Audit-Id: a5d590eb-d0c6-4754-b005-fa0716e0129c
	I0916 10:58:39.579458  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:39.579462  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:39.579467  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:39.579471  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:39.579477  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:39 GMT
	I0916 10:58:39.579587  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:39.579929  175223 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
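The timestamps show the cadence of this wait: one Pod/Node check roughly every half second (…:39.071, :39.571, :40.071, …) until the pod turns Ready or a timeout expires. A sketch of such a fixed-interval poll loop, reusing podIsReady from the sketch above; waitPodReady and its parameters are illustrative, not minikube's real wait implementation:

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady re-checks readiness on a fixed interval, mirroring the
	// ~500ms rhythm of the requests in this log.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if ready, err := podIsReady(ctx, cs, ns, name); err == nil && ready {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s was not Ready within %v", ns, name, timeout)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(interval):
			}
		}
	}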
	I0916 10:58:40.071219  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:40.071243  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:40.071251  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:40.071255  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:40.073480  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:40.073502  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:40.073514  175223 round_trippers.go:580]     Audit-Id: e5724bbb-5f39-47bf-b21c-a56a174e277d
	I0916 10:58:40.073519  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:40.073521  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:40.073524  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:40.073527  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:40.073530  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:40 GMT
	I0916 10:58:40.073758  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:40.074209  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:40.074222  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:40.074229  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:40.074233  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:40.076010  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:40.076034  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:40.076044  175223 round_trippers.go:580]     Audit-Id: 0854632e-255e-431f-bd18-5bc2ed1453eb
	I0916 10:58:40.076049  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:40.076056  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:40.076059  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:40.076064  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:40.076070  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:40 GMT
	I0916 10:58:40.076183  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:40.571778  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:40.571806  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:40.571821  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:40.571828  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:40.574115  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:40.574142  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:40.574152  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:40.574163  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:40 GMT
	I0916 10:58:40.574167  175223 round_trippers.go:580]     Audit-Id: 3c3c8de2-d5eb-46f4-99e6-31bf44441e66
	I0916 10:58:40.574172  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:40.574176  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:40.574182  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:40.574326  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:40.574804  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:40.574819  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:40.574828  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:40.574834  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:40.576664  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:40.576694  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:40.576701  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:40.576705  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:40.576708  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:40.576711  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:40 GMT
	I0916 10:58:40.576714  175223 round_trippers.go:580]     Audit-Id: 9aa41ae6-fa77-4fed-8363-6838ed427d2a
	I0916 10:58:40.576718  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:40.576879  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:41.071519  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:41.071544  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:41.071552  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:41.071555  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:41.074115  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:41.074138  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:41.074147  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:41.074152  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:41.074155  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:41.074160  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:41 GMT
	I0916 10:58:41.074163  175223 round_trippers.go:580]     Audit-Id: 82ce6d1d-3561-4d0c-a363-6099d52e7268
	I0916 10:58:41.074168  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:41.074371  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:41.074852  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:41.074866  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:41.074872  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:41.074876  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:41.076921  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:41.076940  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:41.076947  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:41.076952  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:41 GMT
	I0916 10:58:41.076957  175223 round_trippers.go:580]     Audit-Id: de70a149-e94a-4538-bbb2-38e2adaae921
	I0916 10:58:41.076962  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:41.076965  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:41.076970  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:41.077069  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:41.571820  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:41.571861  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:41.571869  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:41.571873  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:41.574156  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:41.574183  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:41.574192  175223 round_trippers.go:580]     Audit-Id: 955c4dbe-6ca2-4d83-898f-352ea3705fac
	I0916 10:58:41.574197  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:41.574202  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:41.574208  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:41.574212  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:41.574216  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:41 GMT
	I0916 10:58:41.574414  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:41.574933  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:41.574949  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:41.574956  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:41.574960  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:41.576698  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:41.576715  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:41.576722  175223 round_trippers.go:580]     Audit-Id: 20f752f5-4d09-445f-96df-a761a39636f6
	I0916 10:58:41.576727  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:41.576731  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:41.576733  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:41.576736  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:41.576740  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:41 GMT
	I0916 10:58:41.576866  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:42.071608  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:42.071637  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:42.071646  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:42.071649  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:42.074146  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:42.074173  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:42.074183  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:42.074189  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:42.074195  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:42.074199  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:42.074203  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:42 GMT
	I0916 10:58:42.074207  175223 round_trippers.go:580]     Audit-Id: 5b48919e-0dea-4965-9164-adceccd6d48b
	I0916 10:58:42.074358  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:42.074816  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:42.074829  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:42.074837  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:42.074840  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:42.076555  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:42.076576  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:42.076584  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:42 GMT
	I0916 10:58:42.076588  175223 round_trippers.go:580]     Audit-Id: baf3e564-d759-40a5-a332-3dce00a1d3ee
	I0916 10:58:42.076591  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:42.076593  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:42.076596  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:42.076599  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:42.076779  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:42.077075  175223 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
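Everything else in this block is client-go's debug round tripper, which prints each API call at high klog verbosity: the method and URL (round_trippers.go:463), the request headers (:473), the response status with latency (:574), the response headers (:580), and a body truncated by request.go after a fixed budget. A hand-rolled equivalent of that wrapper, purely for illustration; loggingRT is not minikube's or client-go's type:

	import (
		"log"
		"net/http"
		"strings"
		"time"
	)

	// loggingRT wraps another RoundTripper and prints request/response
	// details in roughly the shape seen in this log.
	type loggingRT struct{ next http.RoundTripper }

	func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
		start := time.Now()
		log.Printf("%s %s", req.Method, req.URL)
		for k, v := range req.Header {
			log.Printf("    %s: %s", k, strings.Join(v, ","))
		}
		resp, err := l.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
		for k, v := range resp.Header {
			log.Printf("    %s: %s", k, strings.Join(v, ","))
		}
		return resp, nil
	}

	// Installed on a client, e.g.: httpClient.Transport = loggingRT{next: http.DefaultTransport}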
	I0916 10:58:42.571394  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:42.571419  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:42.571431  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:42.571440  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:42.573838  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:42.573867  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:42.573878  175223 round_trippers.go:580]     Audit-Id: 60a1b318-4007-4fb3-8087-e950ac6b92f3
	I0916 10:58:42.573884  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:42.573889  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:42.573895  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:42.573901  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:42.573908  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:42 GMT
	I0916 10:58:42.574083  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:42.574547  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:42.574563  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:42.574569  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:42.574575  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:42.576409  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:42.576424  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:42.576430  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:42.576433  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:42 GMT
	I0916 10:58:42.576436  175223 round_trippers.go:580]     Audit-Id: 3ab5a0bb-4722-4a98-ac39-1c5d52fd9dce
	I0916 10:58:42.576438  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:42.576441  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:42.576445  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:42.576530  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:43.071364  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:43.071393  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:43.071400  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:43.071404  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:43.073995  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:43.074022  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:43.074030  175223 round_trippers.go:580]     Audit-Id: c892b58e-5cda-4666-8f12-fe2972910038
	I0916 10:58:43.074034  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:43.074038  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:43.074041  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:43.074046  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:43.074049  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:43 GMT
	I0916 10:58:43.074373  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:43.074834  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:43.074847  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:43.074854  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:43.074858  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:43.076615  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:43.076630  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:43.076636  175223 round_trippers.go:580]     Audit-Id: 6ff6be2e-b100-496a-b402-9cbc85b6c029
	I0916 10:58:43.076639  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:43.076643  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:43.076647  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:43.076649  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:43.076651  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:43 GMT
	I0916 10:58:43.076783  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:43.571426  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:43.571456  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:43.571465  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:43.571471  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:43.573736  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:43.573768  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:43.573779  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:43.573786  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:43.573794  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:43 GMT
	I0916 10:58:43.573801  175223 round_trippers.go:580]     Audit-Id: 64aa2eaa-7e87-4261-9e37-f8c099d1775f
	I0916 10:58:43.573818  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:43.573830  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:43.573978  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:43.574474  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:43.574491  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:43.574498  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:43.574502  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:43.576250  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:43.576268  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:43.576279  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:43.576286  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:43.576292  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:43.576297  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:43 GMT
	I0916 10:58:43.576301  175223 round_trippers.go:580]     Audit-Id: 9e02f41d-67a6-4f2e-b9dd-b2eb57a215f4
	I0916 10:58:43.576305  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:43.576426  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:44.071090  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:44.071117  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:44.071125  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:44.071133  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:44.073501  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:44.073529  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:44.073539  175223 round_trippers.go:580]     Audit-Id: f8a3858b-bdb9-4284-a1b5-c482c621fa82
	I0916 10:58:44.073546  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:44.073551  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:44.073556  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:44.073561  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:44.073567  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:44 GMT
	I0916 10:58:44.073767  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:44.074285  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:44.074299  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:44.074306  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:44.074310  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:44.075989  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:44.076017  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:44.076027  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:44.076034  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:44 GMT
	I0916 10:58:44.076039  175223 round_trippers.go:580]     Audit-Id: d4dd9435-050f-4fcb-aa03-29bdae35dc33
	I0916 10:58:44.076043  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:44.076048  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:44.076052  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:44.076198  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:44.571920  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:44.571947  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:44.571955  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:44.571959  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:44.574493  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:44.574515  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:44.574524  175223 round_trippers.go:580]     Audit-Id: 01b32d93-bcbc-4734-8ec5-93535cae05b5
	I0916 10:58:44.574528  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:44.574534  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:44.574538  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:44.574541  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:44.574545  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:44 GMT
	I0916 10:58:44.574721  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:44.575320  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:44.575338  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:44.575349  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:44.575356  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:44.577014  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:44.577046  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:44.577054  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:44.577061  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:44.577066  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:44.577072  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:44 GMT
	I0916 10:58:44.577077  175223 round_trippers.go:580]     Audit-Id: 845a3508-a46e-4200-b2ea-eca81b5efd2d
	I0916 10:58:44.577086  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:44.577218  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:44.577534  175223 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
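The cycle above repeats until the pod's Ready condition flips to True: roughly every 500ms the client GETs the coredns pod and the node, inspects status.conditions, and logs "Ready":"False" while it waits. A minimal client-go sketch of that loop follows; it is not minikube's actual pod_ready.go (which is not shown here), so the kubeconfig loading and error handling are assumptions, while the pod name, namespace, poll interval, and 6-minute budget mirror the log.

// Sketch of the readiness poll visible in the log above. Assumes a
// kubeconfig at the default location; not minikube's own implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and build a clientset (assumption: kubeconfig
	// already points at the multinode-026168 cluster).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, matching the cadence and the
	// "waiting up to 6m0s" budget seen in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-s82cx", metav1.GetOptions{})
			if err != nil {
				return false, err // stop polling on API errors
			}
			// A pod is Ready when its PodReady condition reports True.
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}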
	I0916 10:58:45.071980  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:45.072005  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:45.072014  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:45.072017  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:45.074291  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:45.074318  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:45.074328  175223 round_trippers.go:580]     Audit-Id: fe4882da-af3e-41dc-8708-eb00d2c3095a
	I0916 10:58:45.074335  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:45.074340  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:45.074346  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:45.074350  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:45.074355  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:45 GMT
	I0916 10:58:45.074487  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:45.074975  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:45.074991  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:45.075003  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:45.075008  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:45.076730  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:45.076751  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:45.076759  175223 round_trippers.go:580]     Audit-Id: a145d4ea-dc8e-4c7b-ba91-574a45971822
	I0916 10:58:45.076764  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:45.076769  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:45.076774  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:45.076782  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:45.076788  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:45 GMT
	I0916 10:58:45.076933  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:45.571669  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:45.571694  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:45.571702  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:45.571706  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:45.574170  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:45.574194  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:45.574201  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:45.574205  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:45 GMT
	I0916 10:58:45.574209  175223 round_trippers.go:580]     Audit-Id: 40a4ebc7-5646-4c2a-a2a3-d2f4e07f3745
	I0916 10:58:45.574212  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:45.574216  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:45.574219  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:45.574383  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:45.574838  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:45.574850  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:45.574857  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:45.574861  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:45.576739  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:45.576754  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:45.576761  175223 round_trippers.go:580]     Audit-Id: 7aada23e-a2fc-4d91-bf03-3296db83569b
	I0916 10:58:45.576765  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:45.576768  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:45.576772  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:45.576775  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:45.576778  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:45 GMT
	I0916 10:58:45.576982  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:46.071413  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:46.071436  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:46.071444  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.071448  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.075753  175223 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:58:46.075790  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:46.075813  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:46.075822  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.075835  175223 round_trippers.go:580]     Audit-Id: 12a62f39-af9b-4401-8edc-0e8769b05927
	I0916 10:58:46.075840  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.075850  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.075861  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:46.076038  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:46.076729  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:46.076752  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:46.076763  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.076769  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.078497  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:46.078518  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:46.078531  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:46.078537  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.078542  175223 round_trippers.go:580]     Audit-Id: 00045b57-4e63-46dd-841d-10701b56156f
	I0916 10:58:46.078546  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.078550  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.078553  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:46.078699  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:46.571036  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:46.571060  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:46.571071  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.571077  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.573479  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:46.573508  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:46.573518  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.573523  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:46.573530  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:46.573534  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.573538  175223 round_trippers.go:580]     Audit-Id: 31d6d1b0-8172-4321-81e4-0be29fe48d0c
	I0916 10:58:46.573543  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.573759  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:46.574265  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:46.574282  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:46.574292  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.574297  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.576147  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:46.576166  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:46.576176  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.576183  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.576189  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:46.576194  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:46.576200  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.576206  175223 round_trippers.go:580]     Audit-Id: 3510fb75-897a-4996-b498-cb898b489163
	I0916 10:58:46.576330  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:47.071956  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:47.071983  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:47.071990  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.071994  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.074215  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:47.074240  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:47.074251  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:47.074256  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:47.074260  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.074263  175223 round_trippers.go:580]     Audit-Id: 29653239-9d1d-481b-a823-6466093abfd6
	I0916 10:58:47.074267  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.074271  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.074369  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:47.074814  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:47.074825  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:47.074835  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.074839  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.076692  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.076707  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:47.076713  175223 round_trippers.go:580]     Audit-Id: 7e25e408-ab78-4bea-a110-6f4d3786b607
	I0916 10:58:47.076719  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.076722  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.076726  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:47.076729  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:47.076732  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.076876  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:47.077187  175223 pod_ready.go:103] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"False"
	I0916 10:58:47.571620  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:47.571649  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:47.571657  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.571661  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.574143  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:47.574164  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:47.574170  175223 round_trippers.go:580]     Audit-Id: 810b6cbc-669d-4ecf-8d28-142de6959ea1
	I0916 10:58:47.574177  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.574182  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.574187  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:47.574190  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:47.574194  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.574296  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:47.574781  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:47.574796  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:47.574803  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.574806  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.576486  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.576503  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:47.576512  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.576517  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.576522  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:47.576527  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:47.576532  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.576536  175223 round_trippers.go:580]     Audit-Id: 88d583f7-c1d3-47b2-a3ea-32783e70efee
	I0916 10:58:47.576666  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:48.071799  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:48.071824  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:48.071832  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:48.071837  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:48.074226  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:48.074254  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:48.074302  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:48.074332  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:48.074337  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:48 GMT
	I0916 10:58:48.074341  175223 round_trippers.go:580]     Audit-Id: f8a02404-4598-4712-9e48-146d9ed19ec8
	I0916 10:58:48.074343  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:48.074347  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:48.074487  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:48.075000  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:48.075017  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:48.075024  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:48.075027  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:48.076843  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:48.076864  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:48.076874  175223 round_trippers.go:580]     Audit-Id: b786eda4-62dc-40e9-a4a4-fdc2a50035b0
	I0916 10:58:48.076881  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:48.076887  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:48.076891  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:48.076896  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:48.076900  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:48 GMT
	I0916 10:58:48.077050  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:48.571616  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:48.571645  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:48.571653  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:48.571657  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:48.574120  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:48.574144  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:48.574151  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:48.574160  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:48.574164  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:48.574168  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:48 GMT
	I0916 10:58:48.574171  175223 round_trippers.go:580]     Audit-Id: c33afca5-529e-43c2-945c-7620a9ade04f
	I0916 10:58:48.574174  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:48.574330  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"969","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7042 chars]
	I0916 10:58:48.574820  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:48.574832  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:48.574839  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:48.574843  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:48.576957  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:48.576974  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:48.576980  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:48.576986  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:48 GMT
	I0916 10:58:48.576989  175223 round_trippers.go:580]     Audit-Id: e7b882b5-5102-4713-9d0c-33194f3be6dd
	I0916 10:58:48.576991  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:48.576994  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:48.576998  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:48.577085  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:49.071788  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s82cx
	I0916 10:58:49.071814  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.071823  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.071826  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.074302  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:49.074332  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.074341  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.074346  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.074351  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.074356  175223 round_trippers.go:580]     Audit-Id: 12d90eee-29b3-4c3c-93ca-8dcd52abed64
	I0916 10:58:49.074360  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.074365  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.074576  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s82cx","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"85130138-c50d-47a8-8bbe-de91bb9a0472","resourceVersion":"1056","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"d9cfd30e-6788-4004-a9bc-9988e36ab640","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cfd30e-6788-4004-a9bc-9988e36ab640\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6814 chars]
	I0916 10:58:49.075043  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:49.075057  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.075064  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.075069  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.076798  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.076820  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.076830  175223 round_trippers.go:580]     Audit-Id: 5c8b678e-28ad-41be-91c9-e4753ec9eff5
	I0916 10:58:49.076836  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.076840  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.076849  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.076859  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.076866  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.076974  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:49.077321  175223 pod_ready.go:93] pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:49.077371  175223 pod_ready.go:82] duration metric: took 18.506587734s for pod "coredns-7c65d6cfc9-s82cx" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.077390  175223 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.077450  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-026168
	I0916 10:58:49.077462  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.077473  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.077479  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.079248  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.079265  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.079271  175223 round_trippers.go:580]     Audit-Id: b55a83b0-d506-40cb-a576-4ca5a8e37298
	I0916 10:58:49.079276  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.079279  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.079282  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.079285  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.079288  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.079447  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-026168","namespace":"kube-system","uid":"7221a4cc-7e2d-41a3-b83b-579646af2de2","resourceVersion":"984","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.mirror":"092d7e072df24e58fb10434e76d508b1","kubernetes.io/config.seen":"2024-09-16T10:53:34.315832212Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6575 chars]
	I0916 10:58:49.079931  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:49.079951  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.079959  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.079971  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.081618  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.081634  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.081639  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.081643  175223 round_trippers.go:580]     Audit-Id: 442cd4fd-dd52-4545-8863-217427718c6c
	I0916 10:58:49.081646  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.081653  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.081657  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.081662  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.081771  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:49.082056  175223 pod_ready.go:93] pod "etcd-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:49.082069  175223 pod_ready.go:82] duration metric: took 4.672289ms for pod "etcd-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.082084  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.082132  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-026168
	I0916 10:58:49.082139  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.082146  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.082150  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.083901  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.083919  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.083928  175223 round_trippers.go:580]     Audit-Id: 8411fb99-6040-44a1-a34b-876ace33bf06
	I0916 10:58:49.083934  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.083938  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.083941  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.083944  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.083947  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.084141  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-026168","namespace":"kube-system","uid":"e0a10f33-efc2-4f2d-b46c-bdb68cf664ce","resourceVersion":"986","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.mirror":"4bba00449db3cc3f6a87928cc07bfcdd","kubernetes.io/config.seen":"2024-09-16T10:53:34.315835780Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 9107 chars]
	I0916 10:58:49.084765  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:49.084787  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.084798  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.084813  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.086404  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.086421  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.086428  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.086436  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.086440  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.086443  175223 round_trippers.go:580]     Audit-Id: 799707d3-08f6-4fd6-ac3d-cedf33c507ce
	I0916 10:58:49.086446  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.086450  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.086584  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:49.086966  175223 pod_ready.go:93] pod "kube-apiserver-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:49.086986  175223 pod_ready.go:82] duration metric: took 4.891695ms for pod "kube-apiserver-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.086995  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.087041  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-026168
	I0916 10:58:49.087045  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.087052  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.087055  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.088775  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.088787  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.088793  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.088796  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.088799  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.088803  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.088806  175223 round_trippers.go:580]     Audit-Id: bd98772b-4a68-4db7-8a9a-5c530a80f337
	I0916 10:58:49.088809  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.088945  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-026168","namespace":"kube-system","uid":"c0b53919-27a0-4a54-ba15-a530a06dbf0d","resourceVersion":"990","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.mirror":"b6fa2c2ed4d3d94b12230682ab2a118d","kubernetes.io/config.seen":"2024-09-16T10:53:34.315836809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8897 chars]
	I0916 10:58:49.089355  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:49.089370  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.089379  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.089384  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.090944  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.090962  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.090972  175223 round_trippers.go:580]     Audit-Id: 2b8a536a-08e5-44b9-a718-4f460f27d759
	I0916 10:58:49.090977  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.090981  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.090987  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.090998  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.091003  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.091096  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:49.091407  175223 pod_ready.go:93] pod "kube-controller-manager-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:49.091425  175223 pod_ready.go:82] duration metric: took 4.42451ms for pod "kube-controller-manager-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.091439  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.091496  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6p6vt
	I0916 10:58:49.091508  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.091518  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.091525  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.093130  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.093150  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.093159  175223 round_trippers.go:580]     Audit-Id: c7e8ade4-aaa8-4d6a-be74-2f84a174654e
	I0916 10:58:49.093164  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.093167  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.093175  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.093179  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.093186  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.093289  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6p6vt","generateName":"kube-proxy-","namespace":"kube-system","uid":"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c","resourceVersion":"967","creationTimestamp":"2024-09-16T10:53:39Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6170 chars]
	I0916 10:58:49.093808  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:49.093824  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.093834  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.093840  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.095308  175223 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:49.095333  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.095341  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.095344  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.095348  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.095351  175223 round_trippers.go:580]     Audit-Id: c9300f42-c7b5-48e1-a90a-331c00ada266
	I0916 10:58:49.095354  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.095358  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.095533  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:49.095807  175223 pod_ready.go:93] pod "kube-proxy-6p6vt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:49.095820  175223 pod_ready.go:82] duration metric: took 4.37532ms for pod "kube-proxy-6p6vt" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.095829  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.272260  175223 request.go:632] Waited for 176.357177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:58:49.272362  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g86bs
	I0916 10:58:49.272373  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.272382  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.272393  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.274700  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:49.274718  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.274724  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.274728  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.274731  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.274733  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.274737  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.274740  175223 round_trippers.go:580]     Audit-Id: d1bdef6f-f3f5-4c01-9da2-4c1780324d8c
	I0916 10:58:49.274870  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g86bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"efc5e34d-fd17-408e-ad74-cd36ded784b3","resourceVersion":"871","creationTimestamp":"2024-09-16T10:55:06Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6178 chars]
	I0916 10:58:49.472746  175223 request.go:632] Waited for 197.396135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:58:49.472816  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m03
	I0916 10:58:49.472821  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.472829  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.472834  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.475033  175223 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:58:49.475053  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.475069  175223 round_trippers.go:580]     Audit-Id: 925c031e-21ba-44eb-98ae-53bcd3227b39
	I0916 10:58:49.475076  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.475081  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.475088  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.475092  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.475095  175223 round_trippers.go:580]     Content-Length: 210
	I0916 10:58:49.475098  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.475121  175223 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-026168-m03\" not found","reason":"NotFound","details":{"name":"multinode-026168-m03","kind":"nodes"},"code":404}
	I0916 10:58:49.475206  175223 pod_ready.go:98] node "multinode-026168-m03" hosting pod "kube-proxy-g86bs" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-026168-m03": nodes "multinode-026168-m03" not found
	I0916 10:58:49.475221  175223 pod_ready.go:82] duration metric: took 379.38366ms for pod "kube-proxy-g86bs" in "kube-system" namespace to be "Ready" ...
	E0916 10:58:49.475232  175223 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-026168-m03" hosting pod "kube-proxy-g86bs" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-026168-m03": nodes "multinode-026168-m03" not found
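	
	The NotFound handling just above is worth a note: kube-proxy-g86bs still exists as a DaemonSet pod, but its hosting node multinode-026168-m03 has been deleted, so the readiness checker marks the pod skipped rather than burning the 6m timeout on a pod that can never become Ready. A compilable sketch of that pattern with client-go (the function name and skip semantics are illustrative, not minikube's actual pod_ready code):
	
	package podready
	
	import (
		"context"
		"fmt"
	
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// shouldSkipPodOnNode reports whether waiting on a pod is pointless because
	// its hosting node no longer exists (a 404 from the API, as in the log above).
	func shouldSkipPodOnNode(ctx context.Context, client kubernetes.Interface, nodeName string) (skip bool, err error) {
		_, err = client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("node %q not found, skipping pods scheduled to it\n", nodeName)
			return true, nil // corresponds to the "(skipping!)" branch logged above
		}
		return false, err
	}
	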
	I0916 10:58:49.475242  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.672558  175223 request.go:632] Waited for 197.226297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:58:49.672616  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qds2d
	I0916 10:58:49.672621  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.672629  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.672633  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.675009  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:49.675039  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.675049  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.675057  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.675064  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.675070  175223 round_trippers.go:580]     Audit-Id: 9cf93259-25b8-4ae0-b563-db9ddeba2587
	I0916 10:58:49.675076  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.675082  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.675220  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qds2d","generateName":"kube-proxy-","namespace":"kube-system","uid":"ac30bd54-b932-4f52-a53c-4edbc5eefc7c","resourceVersion":"1050","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d4c668f3-49d4-42ff-b17e-89092925a639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c668f3-49d4-42ff-b17e-89092925a639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6179 chars]
	I0916 10:58:49.871979  175223 request.go:632] Waited for 196.305865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:58:49.872051  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168-m02
	I0916 10:58:49.872059  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:49.872070  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:49.872079  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:49.874593  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:49.874618  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:49.874627  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:49.874633  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:49.874639  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:49.874644  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:49.874648  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:49 GMT
	I0916 10:58:49.874652  175223 round_trippers.go:580]     Audit-Id: 95f7c748-a2ec-4b5a-a173-1f9ed9859006
	I0916 10:58:49.874762  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168-m02","uid":"56a8a566-f76b-401e-8fe6-92e5cb3b42f4","resourceVersion":"1021","creationTimestamp":"2024-09-16T10:54:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_54_36_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f
:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"mana [truncated 6053 chars]
	I0916 10:58:49.875104  175223 pod_ready.go:93] pod "kube-proxy-qds2d" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:49.875123  175223 pod_ready.go:82] duration metric: took 399.871779ms for pod "kube-proxy-qds2d" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:49.875136  175223 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:50.072305  175223 request.go:632] Waited for 197.099743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:50.072368  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-026168
	I0916 10:58:50.072374  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:50.072381  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:50.072383  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:50.074859  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:50.074884  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:50.074894  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:50.074899  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:50.074906  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:50 GMT
	I0916 10:58:50.074911  175223 round_trippers.go:580]     Audit-Id: e019f126-6c64-4892-a41a-3b700eff68e1
	I0916 10:58:50.074916  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:50.074922  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:50.075024  175223 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-026168","namespace":"kube-system","uid":"b293178b-0aac-457b-b950-71fdd2c8fa80","resourceVersion":"988","creationTimestamp":"2024-09-16T10:53:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.mirror":"e36911f70e4c774f0aa751a49e1481ae","kubernetes.io/config.seen":"2024-09-16T10:53:34.315837612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:53:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5101 chars]
	I0916 10:58:50.272796  175223 request.go:632] Waited for 197.413028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:50.272882  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-026168
	I0916 10:58:50.272887  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:50.272894  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:50.272898  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:50.275139  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:50.275163  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:50.275170  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:50.275174  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:50.275179  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:50.275183  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:50.275188  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:50 GMT
	I0916 10:58:50.275193  175223 round_trippers.go:580]     Audit-Id: e65f3fc0-b212-43ab-a279-88d165824afe
	I0916 10:58:50.275376  175223 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T10:53:31Z","fieldsType":"FieldsV1","fiel [truncated 6264 chars]
	I0916 10:58:50.275700  175223 pod_ready.go:93] pod "kube-scheduler-multinode-026168" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:50.275719  175223 pod_ready.go:82] duration metric: took 400.575851ms for pod "kube-scheduler-multinode-026168" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:50.275734  175223 pod_ready.go:39] duration metric: took 19.711282765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
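	
	The 19.7s "extra waiting" total is the sum of the per-pod waits logged above: each system pod is fetched from the API server and its PodReady condition inspected until it reports True. A minimal sketch of that check and poll loop (hand-rolled here for illustration; minikube's real loop lives in pod_ready.go and, as seen above, also re-checks the hosting node):
	
	package podready
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// isPodReady mirrors the check behind the `has status "Ready":"True"` lines:
	// a pod is Ready when its PodReady condition is ConditionTrue.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	// waitPodReady polls the API server (the repeated GETs seen above) until the
	// pod is Ready or the 6m0s-style timeout expires.
	func waitPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient API errors and keep polling
				}
				return isPodReady(pod), nil
			})
	}
	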
	I0916 10:58:50.275756  175223 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:58:50.275804  175223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:58:50.287105  175223 system_svc.go:56] duration metric: took 11.340688ms WaitForService to wait for kubelet
	I0916 10:58:50.287134  175223 kubeadm.go:582] duration metric: took 19.823104428s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:58:50.287166  175223 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:58:50.472640  175223 request.go:632] Waited for 185.391164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:58:50.472736  175223 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:58:50.472747  175223 round_trippers.go:469] Request Headers:
	I0916 10:58:50.472759  175223 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:50.472768  175223 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:50.475542  175223 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:50.475638  175223 round_trippers.go:577] Response Headers:
	I0916 10:58:50.475650  175223 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:50.475655  175223 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b19ad8bf-c726-4548-9a0d-c2225108f365
	I0916 10:58:50.475660  175223 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24477166-1830-4b05-814d-1ee195bdcde3
	I0916 10:58:50.475675  175223 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:50 GMT
	I0916 10:58:50.475679  175223 round_trippers.go:580]     Audit-Id: 5b6ba3b8-99e9-4cbc-a124-cdfd1db93f8b
	I0916 10:58:50.475685  175223 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:50.475901  175223 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1063"},"items":[{"metadata":{"name":"multinode-026168","uid":"1c9fe2de-bd3a-4638-af05-df4e6e743e4c","resourceVersion":"904","creationTimestamp":"2024-09-16T10:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-026168","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-026168","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_53_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 13363 chars]
	I0916 10:58:50.476400  175223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:58:50.476419  175223 node_conditions.go:123] node cpu capacity is 8
	I0916 10:58:50.476430  175223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:58:50.476435  175223 node_conditions.go:123] node cpu capacity is 8
	I0916 10:58:50.476441  175223 node_conditions.go:105] duration metric: took 189.2687ms to run NodePressure ...
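	
	The recurring "Waited for …ms due to client-side throttling, not priority and fairness" messages above come from client-go's local rate limiter, not from the API server: with an unset rest.Config the client defaults to roughly 5 requests/s with a burst of 10, and the library logs whenever a request had to be delayed. A tool that legitimately issues bursts of GETs like these can raise the limits on its config; a sketch with illustrative values (50/100 is an assumption, not a recommendation):
	
	package podready
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// newFastClient builds a clientset whose local rate limiter will not introduce
	// the ~200ms waits seen in the log. Leaving QPS/Burst at zero means "use
	// client-go's defaults" (about 5 QPS, burst 10).
	func newFastClient(kubeconfig string) (kubernetes.Interface, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}
	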
	I0916 10:58:50.476457  175223 start.go:241] waiting for startup goroutines ...
	I0916 10:58:50.476492  175223 start.go:255] writing updated cluster config ...
	I0916 10:58:50.476789  175223 ssh_runner.go:195] Run: rm -f paused
	I0916 10:58:50.483848  175223 out.go:177] * Done! kubectl is now configured to use "multinode-026168" cluster and "default" namespace by default
	E0916 10:58:50.485381  175223 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
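	
	The only error in an otherwise clean start is the same `fork/exec /usr/local/bin/kubectl: exec format error` that fails most tests in this run: the kernel refuses to execute the kubectl binary, which almost always means a binary built for a different architecture (or a corrupt/truncated download) was installed. One quick way to confirm is to compare the binary's ELF machine type against the host arch; a hedged, standard-library-only Go sketch:
	
	package main
	
	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)
	
	// Prints the ELF machine type of a binary so an amd64 host can spot, say,
	// an arm64 kubectl that would fail with "exec format error".
	func main() {
		path := "/usr/local/bin/kubectl" // the binary from the error above
		f, err := elf.Open(path)
		if err != nil {
			// Not a readable ELF at all (corrupt download, HTML error page, ...)
			// also produces "exec format error" at exec time.
			fmt.Fprintf(os.Stderr, "%s is not a readable ELF binary: %v\n", path, err)
			os.Exit(1)
		}
		defer f.Close()
		fmt.Printf("binary machine: %s, host: %s/%s\n", f.Machine, runtime.GOOS, runtime.GOARCH)
		if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
			fmt.Println("architecture mismatch: reinstall kubectl for this platform")
		}
	}
	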
	
	
	==> CRI-O <==
	Sep 16 10:58:08 multinode-026168 crio[665]: time="2024-09-16 10:58:08.994703486Z" level=info msg="Started container" PID=1344 containerID=2173399a01422e0fd492b8ead76e70daf0c1aac6fee18edd3167e575f015c818 description=default/busybox-7dff88458-qt9rx/busybox id=05506226-77ad-4e35-930b-bd721c99a184 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f60c2989603268532f46a88059f9b17e07bfb1a94d1022765bc8fc4a1db25fe
	Sep 16 10:58:38 multinode-026168 conmon[1235]: conmon c7c445df782de9e94725 <ninfo>: container 1292 exited with status 1
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.490395516Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=365256a3-c328-4900-b8f9-d98c451a6ce2 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.490646250Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=365256a3-c328-4900-b8f9-d98c451a6ce2 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.491290950Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d6249b52-788d-4ebb-87e0-71d7ee18d8e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.491520569Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d6249b52-788d-4ebb-87e0-71d7ee18d8e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.492170694Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=684fada1-11b8-46d5-9e98-90855085a51a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.492286480Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.503338736Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/24f263d533ea6905430d1ac6c8ff7ab3b957d63d5c4b9ed7fc44f5655603b709/merged/etc/passwd: no such file or directory"
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.503377223Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/24f263d533ea6905430d1ac6c8ff7ab3b957d63d5c4b9ed7fc44f5655603b709/merged/etc/group: no such file or directory"
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.537581956Z" level=info msg="Created container 958db36026b68741e142b7b2a9765ba45c07166c90d0623b04a213d0c0d6029a: kube-system/storage-provisioner/storage-provisioner" id=684fada1-11b8-46d5-9e98-90855085a51a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.538167308Z" level=info msg="Starting container: 958db36026b68741e142b7b2a9765ba45c07166c90d0623b04a213d0c0d6029a" id=4fde812c-afd2-46eb-9ccd-f414434a5b07 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 10:58:39 multinode-026168 crio[665]: time="2024-09-16 10:58:39.544721543Z" level=info msg="Started container" PID=1651 containerID=958db36026b68741e142b7b2a9765ba45c07166c90d0623b04a213d0c0d6029a description=kube-system/storage-provisioner/storage-provisioner id=4fde812c-afd2-46eb-9ccd-f414434a5b07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6fe1300dd117690a81196e8f5088ddb33aa19129d4c4d1776a3e72ac9fc34923
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.494717537Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.498580454Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.498620672Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.498640267Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.502012199Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.502040373Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.502052561Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.505054103Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.505079046Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.505090265Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.508410711Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 16 10:58:49 multinode-026168 crio[665]: time="2024-09-16 10:58:49.508436626Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	958db36026b68       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       4                   6fe1300dd1176       storage-provisioner
	2173399a01422       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   43 seconds ago      Running             busybox                   2                   9f60c29896032       busybox-7dff88458-qt9rx
	ddfb4d2f53494       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   43 seconds ago      Running             coredns                   2                   dfd5a8aff3c6c       coredns-7c65d6cfc9-s82cx
	a428b757c65a9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   43 seconds ago      Running             kube-proxy                2                   7449d492c60df       kube-proxy-6p6vt
	c7c445df782de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   43 seconds ago      Exited              storage-provisioner       3                   6fe1300dd1176       storage-provisioner
	9b8367f2abcda       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   43 seconds ago      Running             kindnet-cni               2                   ff8d773cb43cf       kindnet-zv2p5
	5943f1020f1d4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   47 seconds ago      Running             kube-scheduler            2                   4ab72cdfedf14       kube-scheduler-multinode-026168
	ca0dbfcca95c0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   47 seconds ago      Running             kube-controller-manager   2                   f0f414215dcd1       kube-controller-manager-multinode-026168
	79218b756d50c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   47 seconds ago      Running             kube-apiserver            2                   87c5e52398a4b       kube-apiserver-multinode-026168
	8130296a7eafe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   47 seconds ago      Running             etcd                      3                   e141064490663       etcd-multinode-026168
	
	
	==> coredns [ddfb4d2f534948160aba40b1145a2a38ce6cd7ed2df38c71de95d9eab57c11d5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41940 - 33008 "HINFO IN 7470259827340173982.1665473935911970341. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011006332s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1889044765]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:09.028) (total time: 30001ms):
	Trace[1889044765]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:58:39.030)
	Trace[1889044765]: [30.001908336s] [30.001908336s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[680660453]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:09.028) (total time: 30001ms):
	Trace[680660453]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:58:39.030)
	Trace[680660453]: [30.001931401s] [30.001931401s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[190987320]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:09.028) (total time: 30002ms):
	Trace[190987320]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:58:39.030)
	Trace[190987320]: [30.002087025s] [30.002087025s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
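	
	These coredns errors are consistent with a freshly restarted node whose dataplane was still converging: for about 30 seconds after startup the pod could not reach the in-cluster API service VIP 10.96.0.1:443, so every reflector list timed out once and then recovered. From inside a pod, a plain TCP dial reproduces exactly the failing step from the log (`dial tcp 10.96.0.1:443: i/o timeout`) and distinguishes "service VIP unreachable" from an API-level failure; a minimal sketch (address and rough timeout taken from the log, not constants of every cluster):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// Dials the default in-cluster API service VIP. A timeout here means the
	// kube-proxy/CNI path to the VIP is broken, not the API server itself.
	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err) // matches the i/o timeouts above
			return
		}
		conn.Close()
		fmt.Println("service VIP reachable")
	}
	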
	
	
	==> describe nodes <==
	Name:               multinode-026168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_53_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:53:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:58:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:58:08 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:58:08 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:58:08 +0000   Mon, 16 Sep 2024 10:53:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:58:08 +0000   Mon, 16 Sep 2024 10:54:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-026168
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a9b154fdc5943e68705277c3212644c
	  System UUID:                8db2fd04-b5e4-4ec7-8d8e-d94280ac94a3
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qt9rx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 coredns-7c65d6cfc9-s82cx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m13s
	  kube-system                 etcd-multinode-026168                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m18s
	  kube-system                 kindnet-zv2p5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m13s
	  kube-system                 kube-apiserver-multinode-026168             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-multinode-026168    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-6p6vt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-multinode-026168             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m12s                  kube-proxy       
	  Normal   Starting                 43s                    kube-proxy       
	  Normal   Starting                 2m27s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m18s                  kubelet          Node multinode-026168 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 5m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    5m18s                  kubelet          Node multinode-026168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m18s                  kubelet          Node multinode-026168 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m14s                  node-controller  Node multinode-026168 event: Registered Node multinode-026168 in Controller
	  Normal   NodeReady                4m32s                  kubelet          Node multinode-026168 status is now: NodeReady
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m34s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node multinode-026168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node multinode-026168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x7 over 2m33s)  kubelet          Node multinode-026168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m26s                  node-controller  Node multinode-026168 event: Registered Node multinode-026168 in Controller
	  Normal   Starting                 48s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 48s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)      kubelet          Node multinode-026168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)      kubelet          Node multinode-026168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x7 over 48s)      kubelet          Node multinode-026168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           41s                    node-controller  Node multinode-026168 event: Registered Node multinode-026168 in Controller
	
	
	Name:               multinode-026168-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-026168-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-026168
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_54_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:54:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-026168-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:58:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:58:37 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:58:37 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:58:37 +0000   Mon, 16 Sep 2024 10:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:58:37 +0000   Mon, 16 Sep 2024 10:54:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-026168-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 df9aceb6d7fb4b6fae4018b0d27a3deb
	  System UUID:                50f4fbf1-c6a3-4700-a79b-bb8841197877
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z8csk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kindnet-mckv5              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m16s
	  kube-system                 kube-proxy-qds2d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5s                     kube-proxy       
	  Normal   Starting                 4m13s                  kube-proxy       
	  Normal   Starting                 2m1s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  4m16s (x2 over 4m17s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m16s (x2 over 4m17s)  kubelet          Node multinode-026168-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m16s (x2 over 4m17s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m14s                  node-controller  Node multinode-026168-m02 event: Registered Node multinode-026168-m02 in Controller
	  Normal   NodeReady                4m4s                   kubelet          Node multinode-026168-m02 status is now: NodeReady
	  Normal   RegisteredNode           2m26s                  node-controller  Node multinode-026168-m02 event: Registered Node multinode-026168-m02 in Controller
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     2m14s (x7 over 2m21s)  kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m21s)   kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m21s)   kubelet          Node multinode-026168-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           41s                    node-controller  Node multinode-026168-m02 event: Registered Node multinode-026168-m02 in Controller
	  Normal   Starting                 27s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 27s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     21s (x7 over 27s)      kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  15s (x8 over 27s)      kubelet          Node multinode-026168-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 27s)      kubelet          Node multinode-026168-m02 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 10:58] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000008] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000013] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000137] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +1.004052] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
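A note on the dmesg lines above: a "martian source" message means the kernel received a packet whose source address fails reverse-path validation on the arriving interface; here the source is the Service VIP 10.96.0.1 arriving on br-a5a173559814, which appears to be the Docker bridge for the cluster network. Bursts like this during the restarts the test performs are expected and harmless. The behaviour is governed by the rp_filter and log_martians sysctls, which could be inspected like so (a hypothetical check, not something the test runs):

  sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians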
	
	
	==> etcd [8130296a7eafedc30061c14290626cadf93e588320dec2da6ec669f9591af254] <==
	{"level":"info","ts":"2024-09-16T10:58:05.211947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-09-16T10:58:05.212074Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:58:05.212180Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:05.212243Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:05.214499Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:58:05.214848Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:58:05.214892Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:58:05.214646Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:58:05.215207Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:58:07.000183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:07.000274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:07.000314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:07.000336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:58:07.000344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-09-16T10:58:07.000357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:58:07.000367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-09-16T10:58:07.003641Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-026168 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:58:07.003708Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:58:07.003753Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:58:07.003923Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:58:07.003959Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:58:07.004783Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:58:07.004909Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:58:07.005634Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:58:07.005693Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> kernel <==
	 10:58:52 up 41 min,  0 users,  load average: 0.64, 1.09, 0.98
	Linux multinode-026168 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [9b8367f2abcda1dbc554cc97a49349b7759418b9fd9b93f5dcc57a2b6b84cac6] <==
	Trace[1878747659]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:58:39.495)
	Trace[1878747659]: [30.001493794s] [30.001493794s] END
	E0916 10:58:39.495505       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 10:58:39.495508       1 trace.go:236] Trace[1166402467]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 10:58:09.493) (total time: 30001ms):
	Trace[1166402467]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:58:39.495)
	Trace[1166402467]: [30.001543506s] [30.001543506s] END
	W0916 10:58:39.495300       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0916 10:58:39.495530       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 10:58:39.495285       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 10:58:39.495664       1 trace.go:236] Trace[683910338]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 10:58:09.493) (total time: 30001ms):
	Trace[683910338]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:58:39.495)
	Trace[683910338]: [30.001646616s] [30.001646616s] END
	E0916 10:58:39.495686       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 10:58:39.495597       1 trace.go:236] Trace[1032862134]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 10:58:09.493) (total time: 30001ms):
	Trace[1032862134]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:58:39.495)
	Trace[1032862134]: [30.001590516s] [30.001590516s] END
	E0916 10:58:39.495706       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 10:58:41.094586       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:58:41.094614       1 metrics.go:61] Registering metrics
	I0916 10:58:41.094680       1 controller.go:374] Syncing nftables rules
	I0916 10:58:49.494379       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:58:49.494447       1 main.go:299] handling current node
	I0916 10:58:49.496954       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:58:49.496986       1 main.go:322] Node multinode-026168-m02 has CIDR [10.244.1.0/24] 
	I0916 10:58:49.497124       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.67.3 Flags: [] Table: 0} 
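A note on the route message above: kindnet installs one static route per remote node, pointing each node's PodCIDR at that node's InternalIP. The route added here (10.244.1.0/24 via 192.168.67.3) matches the PodCIDR and InternalIP reported for multinode-026168-m02 in the node description earlier, so pod-to-pod routing between the two nodes was restored. A sketch of how one could verify the route landed, assuming the docker driver:

  docker exec multinode-026168 ip route show | grep 10.244.1.0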
	
	
	==> kube-apiserver [79218b756d50cb9ff598233d958a736b8331cd31ff22e196eb67ec0b624c40e5] <==
	I0916 10:58:07.935953       1 establishing_controller.go:81] Starting EstablishingController
	I0916 10:58:07.935971       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0916 10:58:07.935980       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0916 10:58:07.935989       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0916 10:58:08.010335       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:58:08.010447       1 policy_source.go:224] refreshing policies
	I0916 10:58:08.093997       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:58:08.094124       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:58:08.094151       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:58:08.095515       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:58:08.094419       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:58:08.094438       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:58:08.094475       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:58:08.098465       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:58:08.098493       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:58:08.098506       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:58:08.098518       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:58:08.094527       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:58:08.094611       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:58:08.102452       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:58:08.108886       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:58:08.111930       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:58:08.999918       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:58:11.614567       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:58:11.664212       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ca0dbfcca95c007c098ffed52421b7ce6f34ca1418f1477523150cb7a9333ed6] <==
	I0916 10:58:11.335084       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:58:11.360472       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:58:11.360478       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 10:58:11.396617       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:58:11.411563       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:58:11.416044       1 shared_informer.go:320] Caches are synced for expand
	I0916 10:58:11.465536       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:58:11.489100       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:58:11.515026       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:58:11.585578       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="324.385967ms"
	I0916 10:58:11.585697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.15µs"
	I0916 10:58:11.928987       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:58:11.961669       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:58:11.961701       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:58:37.849818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-026168-m02"
	I0916 10:58:46.080353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.621905ms"
	I0916 10:58:46.080425       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.624µs"
	I0916 10:58:47.176633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.889749ms"
	I0916 10:58:47.176735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.366µs"
	I0916 10:58:48.915547       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.610645ms"
	I0916 10:58:48.915702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="87.716µs"
	I0916 10:58:51.322657       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2jtzj"
	I0916 10:58:51.344475       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2jtzj"
	I0916 10:58:51.344520       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-g86bs"
	I0916 10:58:51.363650       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-g86bs"
	
	
	==> kube-proxy [a428b757c65a996e3c54460b83b272ad5f550a67305acc0db875022e9458247a] <==
	I0916 10:58:09.014850       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:58:09.166160       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:58:09.166229       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:58:09.197981       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:58:09.198042       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:58:09.200479       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:58:09.200964       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:58:09.201035       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:58:09.202238       1 config.go:199] "Starting service config controller"
	I0916 10:58:09.202308       1 config.go:328] "Starting node config controller"
	I0916 10:58:09.202323       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:58:09.202236       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:58:09.202433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:58:09.202377       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:58:09.302889       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:58:09.302946       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:58:09.302911       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5943f1020f1d40d4b090d6aff7969c2396dcc3b95117cb10b745bb8dbd111ff8] <==
	I0916 10:58:05.700962       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:58:07.957655       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:58:07.957685       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:58:07.957694       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:58:07.957700       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:58:08.009253       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:58:08.009407       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:58:08.012378       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:58:08.012789       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:58:08.012834       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:58:08.012857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:58:08.113041       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:58:05 multinode-026168 kubelet[814]: W0916 10:58:05.213818     814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Sep 16 10:58:05 multinode-026168 kubelet[814]: E0916 10:58:05.213938     814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.67.2:8443: connect: connection refused" logger="UnhandledError"
	Sep 16 10:58:05 multinode-026168 kubelet[814]: I0916 10:58:05.919538     814 kubelet_node_status.go:72] "Attempting to register node" node="multinode-026168"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.110245     814 kubelet_node_status.go:111] "Node was previously registered" node="multinode-026168"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.110370     814 kubelet_node_status.go:75] "Successfully registered node" node="multinode-026168"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.110413     814 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.111240     814 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.356430     814 apiserver.go:52] "Watching apiserver"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.458049     814 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.499141     814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9e993dc5-3e51-407a-96f0-81c74274fb7c-cni-cfg\") pod \"kindnet-zv2p5\" (UID: \"9e993dc5-3e51-407a-96f0-81c74274fb7c\") " pod="kube-system/kindnet-zv2p5"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.499248     814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7-tmp\") pod \"storage-provisioner\" (UID: \"ec6d725d-ce1e-43d8-bc53-ebaec0ea9dc7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.499306     814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42162ba1-cb61-4a95-acc5-5c4c5f3ead8c-lib-modules\") pod \"kube-proxy-6p6vt\" (UID: \"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c\") " pod="kube-system/kube-proxy-6p6vt"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.499327     814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e993dc5-3e51-407a-96f0-81c74274fb7c-xtables-lock\") pod \"kindnet-zv2p5\" (UID: \"9e993dc5-3e51-407a-96f0-81c74274fb7c\") " pod="kube-system/kindnet-zv2p5"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.499359     814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42162ba1-cb61-4a95-acc5-5c4c5f3ead8c-xtables-lock\") pod \"kube-proxy-6p6vt\" (UID: \"42162ba1-cb61-4a95-acc5-5c4c5f3ead8c\") " pod="kube-system/kube-proxy-6p6vt"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.499387     814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e993dc5-3e51-407a-96f0-81c74274fb7c-lib-modules\") pod \"kindnet-zv2p5\" (UID: \"9e993dc5-3e51-407a-96f0-81c74274fb7c\") " pod="kube-system/kindnet-zv2p5"
	Sep 16 10:58:08 multinode-026168 kubelet[814]: I0916 10:58:08.506251     814 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:58:14 multinode-026168 kubelet[814]: E0916 10:58:14.413896     814 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484294413734391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:58:14 multinode-026168 kubelet[814]: E0916 10:58:14.413937     814 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484294413734391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:58:24 multinode-026168 kubelet[814]: E0916 10:58:24.415057     814 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484304414858909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:58:24 multinode-026168 kubelet[814]: E0916 10:58:24.415098     814 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484304414858909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:58:34 multinode-026168 kubelet[814]: E0916 10:58:34.416226     814 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484314416020136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:58:34 multinode-026168 kubelet[814]: E0916 10:58:34.416267     814 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484314416020136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:58:39 multinode-026168 kubelet[814]: I0916 10:58:39.489880     814 scope.go:117] "RemoveContainer" containerID="c7c445df782de9e94725cd6fa49678165511b3385a318d919bf95279e91e1b0a"
	Sep 16 10:58:44 multinode-026168 kubelet[814]: E0916 10:58:44.417200     814 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484324417042963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:58:44 multinode-026168 kubelet[814]: E0916 10:58:44.417229     814 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484324417042963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135007,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-026168 -n multinode-026168
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (593.131µs)
helpers_test.go:263: kubectl --context multinode-026168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/RestartMultiNode (55.38s)
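The failure here is in the test harness rather than the cluster: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary at all, which on this amd64 host almost always indicates a binary built for a different architecture, or a truncated/corrupt download. A minimal triage sketch, assuming shell access to the test machine:

  file /usr/local/bin/kubectl   # expect an x86-64 ELF executable on this host
  uname -m                      # expect x86_64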
TestKubernetesUpgrade (316.58s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-749637 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-749637 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.029071534s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-749637
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-749637: (1.203000632s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-749637 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-749637 status --format={{.Host}}: exit status 7 (62.825553ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-749637 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-749637 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.198054902s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-749637 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-749637 version --output=json: fork/exec /usr/local/bin/kubectl: exec format error (596.906µs)
version_upgrade_test.go:250: error running kubectl: fork/exec /usr/local/bin/kubectl: exec format error
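As with the earlier exec format errors, the sub-millisecond failure time (596.906µs) shows kubectl never actually ran: the 4m25s upgrade itself completed, and only the harness's version check against its local kubectl binary failed. The architecture check sketched after the RestartMultiNode failure above applies here as well.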
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-16 11:09:36.635912906 +0000 UTC m=+2834.482694329
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-749637
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-749637:

-- stdout --
	[
	    {
	        "Id": "531b1900128ef36dfd4679384605b15da7f708dc4df5f8c33578867d95c809c0",
	        "Created": "2024-09-16T11:04:31.027842986Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 223608,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:05:11.909985906Z",
	            "FinishedAt": "2024-09-16T11:05:10.602265738Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/531b1900128ef36dfd4679384605b15da7f708dc4df5f8c33578867d95c809c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/531b1900128ef36dfd4679384605b15da7f708dc4df5f8c33578867d95c809c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/531b1900128ef36dfd4679384605b15da7f708dc4df5f8c33578867d95c809c0/hosts",
	        "LogPath": "/var/lib/docker/containers/531b1900128ef36dfd4679384605b15da7f708dc4df5f8c33578867d95c809c0/531b1900128ef36dfd4679384605b15da7f708dc4df5f8c33578867d95c809c0-json.log",
	        "Name": "/kubernetes-upgrade-749637",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-749637:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-749637",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dd52136e9c8542fc70b8049e6617700558fd0284f530149f15a67a4368f8d2c3-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd52136e9c8542fc70b8049e6617700558fd0284f530149f15a67a4368f8d2c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd52136e9c8542fc70b8049e6617700558fd0284f530149f15a67a4368f8d2c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd52136e9c8542fc70b8049e6617700558fd0284f530149f15a67a4368f8d2c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-749637",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-749637/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-749637",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-749637",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-749637",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eaa850c66cdb75f712ec4e13c2020315ce79c8772602237f7e97befe087a6e71",
	            "SandboxKey": "/var/run/docker/netns/eaa850c66cdb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-749637": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a8e9745219401af858fdf3393b51897c3d76126c229d3b52cdb001cd259bb408",
	                    "EndpointID": "98fe3e3f0890bf0c6c8a2cb0369379ae2198a01f13a6e041114976235fa50997",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-749637",
	                        "531b1900128e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
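
Note: the "Ports" map in the inspect output above is how these tests reach the profile's container from the host; 22/tcp -> 33013 is the SSH endpoint that the ssh_runner/cli_runner calls later in this log dial. As a minimal sketch (not minikube code), the same lookup can be done from Go by shelling out to the docker CLI with the template the cli_runner.go lines below use; the container name is copied from this log and assumes such a container exists locally with the docker CLI on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template as the cli_runner.go invocations in the log below:
	// index NetworkSettings.Ports at "22/tcp" and take the first binding.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"kubernetes-upgrade-749637").Output() // profile container from this log
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	// Against the NetworkSettings shown above this would print 33013.
	fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out)))
}

The logged commands wrap the template in extra quotes ("'{{...}}'"); the sketch passes it bare, since exec.Command does no shell interpretation.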
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-749637 -n kubernetes-upgrade-749637
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-749637 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-749637 logs -n 25: (1.039637089s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p missing-upgrade-922846             | missing-upgrade-922846    | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-911411             | stopped-upgrade-911411    | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	| start   | -p cert-expiration-997173             | cert-expiration-997173    | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-802794             | running-upgrade-802794    | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-802794             | running-upgrade-802794    | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	| delete  | -p missing-upgrade-922846             | missing-upgrade-922846    | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	| start   | -p force-systemd-flag-587021          | force-systemd-flag-587021 | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-259137 --memory=2048         | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:07 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-587021 ssh cat     | force-systemd-flag-587021 | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-587021          | force-systemd-flag-587021 | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:06 UTC |
	| start   | -p cert-options-904767                | cert-options-904767       | jenkins | v1.34.0 | 16 Sep 24 11:06 UTC | 16 Sep 24 11:07 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-259137                       | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-904767 ssh               | cert-options-904767       | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-904767 -- sudo        | cert-options-904767       | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-904767                | cert-options-904767       | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p auto-838467 --memory=3072          | auto-838467               | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-259137                       | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-259137                       | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-259137                       | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-259137                       | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| ssh     | -p auto-838467 pgrep -a               | auto-838467               | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| delete  | -p pause-259137                       | pause-259137              | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p kindnet-838467                     | kindnet-838467            | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-838467 pgrep -a            | kindnet-838467            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-997173             | cert-expiration-997173    | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:09:26
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:09:26.344835  268041 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:09:26.344923  268041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:09:26.344926  268041 out.go:358] Setting ErrFile to fd 2...
	I0916 11:09:26.344930  268041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:09:26.345109  268041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:09:26.345698  268041 out.go:352] Setting JSON to false
	I0916 11:09:26.346911  268041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3106,"bootTime":1726481860,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:09:26.346998  268041 start.go:139] virtualization: kvm guest
	I0916 11:09:26.349525  268041 out.go:177] * [cert-expiration-997173] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:09:26.350827  268041 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:09:26.350869  268041 notify.go:220] Checking for updates...
	I0916 11:09:26.353258  268041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:09:26.354567  268041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:09:26.355771  268041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:09:26.356870  268041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:09:26.374194  268041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:09:26.376108  268041 config.go:182] Loaded profile config "cert-expiration-997173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:09:26.376587  268041 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:09:26.400587  268041 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:09:26.400690  268041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:09:26.454549  268041 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:09:26.443508265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:09:26.454633  268041 docker.go:318] overlay module found
	I0916 11:09:26.457028  268041 out.go:177] * Using the docker driver based on existing profile
	I0916 11:09:21.503696  223317 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:09:21.504153  223317 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0916 11:09:21.504212  223317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:21.504265  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:21.537793  223317 cri.go:89] found id: "68c8b7807ce557f5ad2ec0557bbc81c6457de64c4bf710976d447059df057e96"
	I0916 11:09:21.537820  223317 cri.go:89] found id: ""
	I0916 11:09:21.537830  223317 logs.go:276] 1 containers: [68c8b7807ce557f5ad2ec0557bbc81c6457de64c4bf710976d447059df057e96]
	I0916 11:09:21.537890  223317 ssh_runner.go:195] Run: which crictl
	I0916 11:09:21.541449  223317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:09:21.541529  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:21.574521  223317 cri.go:89] found id: ""
	I0916 11:09:21.574546  223317 logs.go:276] 0 containers: []
	W0916 11:09:21.574558  223317 logs.go:278] No container was found matching "etcd"
	I0916 11:09:21.574571  223317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:09:21.574635  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:21.607530  223317 cri.go:89] found id: ""
	I0916 11:09:21.607561  223317 logs.go:276] 0 containers: []
	W0916 11:09:21.607572  223317 logs.go:278] No container was found matching "coredns"
	I0916 11:09:21.607580  223317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:21.607651  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:21.640362  223317 cri.go:89] found id: ""
	I0916 11:09:21.640392  223317 logs.go:276] 0 containers: []
	W0916 11:09:21.640400  223317 logs.go:278] No container was found matching "kube-scheduler"
	I0916 11:09:21.640407  223317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:21.640457  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:21.674171  223317 cri.go:89] found id: ""
	I0916 11:09:21.674193  223317 logs.go:276] 0 containers: []
	W0916 11:09:21.674202  223317 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:21.674207  223317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:21.674262  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:21.709174  223317 cri.go:89] found id: ""
	I0916 11:09:21.709196  223317 logs.go:276] 0 containers: []
	W0916 11:09:21.709205  223317 logs.go:278] No container was found matching "kube-controller-manager"
	I0916 11:09:21.709210  223317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:21.709263  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:21.743644  223317 cri.go:89] found id: ""
	I0916 11:09:21.743679  223317 logs.go:276] 0 containers: []
	W0916 11:09:21.743691  223317 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:21.743700  223317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:21.743754  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:21.776634  223317 cri.go:89] found id: ""
	I0916 11:09:21.776663  223317 logs.go:276] 0 containers: []
	W0916 11:09:21.776677  223317 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:21.776692  223317 logs.go:123] Gathering logs for kube-apiserver [68c8b7807ce557f5ad2ec0557bbc81c6457de64c4bf710976d447059df057e96] ...
	I0916 11:09:21.776708  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68c8b7807ce557f5ad2ec0557bbc81c6457de64c4bf710976d447059df057e96"
	I0916 11:09:21.812122  223317 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:09:21.812150  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:09:21.840752  223317 logs.go:123] Gathering logs for container status ...
	I0916 11:09:21.840788  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:21.879282  223317 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:21.879309  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:21.979001  223317 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:21.979040  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:21.998517  223317 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:21.998547  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:22.059413  223317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:09:24.559730  223317 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:09:24.560199  223317 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0916 11:09:24.560249  223317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:24.560302  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:24.595697  223317 cri.go:89] found id: "68c8b7807ce557f5ad2ec0557bbc81c6457de64c4bf710976d447059df057e96"
	I0916 11:09:24.595725  223317 cri.go:89] found id: ""
	I0916 11:09:24.595733  223317 logs.go:276] 1 containers: [68c8b7807ce557f5ad2ec0557bbc81c6457de64c4bf710976d447059df057e96]
	I0916 11:09:24.595789  223317 ssh_runner.go:195] Run: which crictl
	I0916 11:09:24.599584  223317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:09:24.599653  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:24.634053  223317 cri.go:89] found id: ""
	I0916 11:09:24.634084  223317 logs.go:276] 0 containers: []
	W0916 11:09:24.634095  223317 logs.go:278] No container was found matching "etcd"
	I0916 11:09:24.634102  223317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:09:24.634155  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:24.668428  223317 cri.go:89] found id: ""
	I0916 11:09:24.668456  223317 logs.go:276] 0 containers: []
	W0916 11:09:24.668464  223317 logs.go:278] No container was found matching "coredns"
	I0916 11:09:24.668471  223317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:24.668522  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:24.702690  223317 cri.go:89] found id: ""
	I0916 11:09:24.702720  223317 logs.go:276] 0 containers: []
	W0916 11:09:24.702732  223317 logs.go:278] No container was found matching "kube-scheduler"
	I0916 11:09:24.702740  223317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:24.702803  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:24.738362  223317 cri.go:89] found id: ""
	I0916 11:09:24.738396  223317 logs.go:276] 0 containers: []
	W0916 11:09:24.738408  223317 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:24.738416  223317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:24.738465  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:24.772422  223317 cri.go:89] found id: ""
	I0916 11:09:24.772445  223317 logs.go:276] 0 containers: []
	W0916 11:09:24.772454  223317 logs.go:278] No container was found matching "kube-controller-manager"
	I0916 11:09:24.772460  223317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:24.772511  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:24.806140  223317 cri.go:89] found id: ""
	I0916 11:09:24.806167  223317 logs.go:276] 0 containers: []
	W0916 11:09:24.806177  223317 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:24.806184  223317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:24.806251  223317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:24.840553  223317 cri.go:89] found id: ""
	I0916 11:09:24.840585  223317 logs.go:276] 0 containers: []
	W0916 11:09:24.840596  223317 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:24.840611  223317 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:09:24.840630  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:09:24.869526  223317 logs.go:123] Gathering logs for container status ...
	I0916 11:09:24.869560  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:24.907796  223317 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:24.907828  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:25.016571  223317 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:25.016610  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:25.037894  223317 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:25.037940  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:25.097952  223317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:09:25.097970  223317 logs.go:123] Gathering logs for kube-apiserver [68c8b7807ce557f5ad2ec0557bbc81c6457de64c4bf710976d447059df057e96] ...
	I0916 11:09:25.097984  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68c8b7807ce557f5ad2ec0557bbc81c6457de64c4bf710976d447059df057e96"
	I0916 11:09:26.458096  268041 start.go:297] selected driver: docker
	I0916 11:09:26.458105  268041 start.go:901] validating driver "docker" against &{Name:cert-expiration-997173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-997173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:26.458211  268041 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:09:26.459380  268041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:09:26.517008  268041 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:09:26.507226718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:09:26.517287  268041 cni.go:84] Creating CNI manager for ""
	I0916 11:09:26.517314  268041 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:09:26.517379  268041 start.go:340] cluster config:
	{Name:cert-expiration-997173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-997173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:26.519173  268041 out.go:177] * Starting "cert-expiration-997173" primary control-plane node in "cert-expiration-997173" cluster
	I0916 11:09:26.520279  268041 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:09:26.521201  268041 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:09:26.522270  268041 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:09:26.522298  268041 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:09:26.522305  268041 cache.go:56] Caching tarball of preloaded images
	I0916 11:09:26.522377  268041 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:09:26.522369  268041 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:09:26.522382  268041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:09:26.522462  268041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/config.json ...
	W0916 11:09:26.545412  268041 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:09:26.545437  268041 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:09:26.545519  268041 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:09:26.545533  268041 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:09:26.545538  268041 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:09:26.545546  268041 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:09:26.545552  268041 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:09:26.609521  268041 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:09:26.609541  268041 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:09:26.609576  268041 start.go:360] acquireMachinesLock for cert-expiration-997173: {Name:mk3a2d42162f3e1862e3fe348c13cc109174b466 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:26.609639  268041 start.go:364] duration metric: took 45.793µs to acquireMachinesLock for "cert-expiration-997173"
	I0916 11:09:26.609654  268041 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:09:26.609658  268041 fix.go:54] fixHost starting: 
	I0916 11:09:26.609881  268041 cli_runner.go:164] Run: docker container inspect cert-expiration-997173 --format={{.State.Status}}
	I0916 11:09:26.628260  268041 fix.go:112] recreateIfNeeded on cert-expiration-997173: state=Running err=<nil>
	W0916 11:09:26.628286  268041 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:09:26.630482  268041 out.go:177] * Updating the running docker "cert-expiration-997173" container ...
	I0916 11:09:27.635906  223317 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:09:27.636311  223317 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0916 11:09:27.636366  223317 kubeadm.go:597] duration metric: took 4m4.03645024s to restartPrimaryControlPlane
	W0916 11:09:27.636415  223317 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 11:09:27.636441  223317 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0916 11:09:28.175786  223317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:09:28.187295  223317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:09:28.196160  223317 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:09:28.196235  223317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:09:28.204768  223317 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:09:28.204791  223317 kubeadm.go:157] found existing configuration files:
	
	I0916 11:09:28.204849  223317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:09:28.213729  223317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:09:28.213826  223317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:09:28.222075  223317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:09:28.230506  223317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:09:28.230571  223317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:09:28.239306  223317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:09:28.247696  223317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:09:28.247767  223317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:09:28.255873  223317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:09:28.263791  223317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:09:28.263869  223317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:09:28.271680  223317 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:09:28.307424  223317 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:09:28.307492  223317 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:09:28.324040  223317 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:09:28.324111  223317 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:09:28.324141  223317 kubeadm.go:310] OS: Linux
	I0916 11:09:28.324210  223317 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:09:28.324274  223317 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:09:28.324331  223317 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:09:28.324391  223317 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:09:28.324461  223317 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:09:28.324530  223317 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:09:28.324592  223317 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:09:28.324671  223317 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:09:28.324738  223317 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:09:28.376771  223317 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:09:28.376916  223317 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:09:28.377062  223317 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:09:28.383621  223317 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:09:28.386672  223317 out.go:235]   - Generating certificates and keys ...
	I0916 11:09:28.386761  223317 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:09:28.386840  223317 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:09:28.386922  223317 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 11:09:28.386981  223317 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 11:09:28.387064  223317 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 11:09:28.387142  223317 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 11:09:28.387217  223317 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 11:09:28.387305  223317 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 11:09:28.387373  223317 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 11:09:28.387436  223317 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 11:09:28.387472  223317 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 11:09:28.387533  223317 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:09:28.453605  223317 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:09:28.756812  223317 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:09:28.836520  223317 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:09:29.015435  223317 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:09:29.073295  223317 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:09:29.073892  223317 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:09:29.076175  223317 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:09:26.631813  268041 machine.go:93] provisionDockerMachine start ...
	I0916 11:09:26.631893  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:26.655082  268041 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:26.655283  268041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0916 11:09:26.655289  268041 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:09:26.794246  268041 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-997173
	
	I0916 11:09:26.794265  268041 ubuntu.go:169] provisioning hostname "cert-expiration-997173"
	I0916 11:09:26.794316  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:26.812758  268041 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:26.812975  268041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0916 11:09:26.812987  268041 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-997173 && echo "cert-expiration-997173" | sudo tee /etc/hostname
	I0916 11:09:26.961273  268041 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-997173
	
	I0916 11:09:26.961343  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:26.979515  268041 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:26.979691  268041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0916 11:09:26.979703  268041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-997173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-997173/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-997173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:09:27.113564  268041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:09:27.113583  268041 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:09:27.113606  268041 ubuntu.go:177] setting up certificates
	I0916 11:09:27.113617  268041 provision.go:84] configureAuth start
	I0916 11:09:27.113674  268041 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-997173
	I0916 11:09:27.131295  268041 provision.go:143] copyHostCerts
	I0916 11:09:27.131343  268041 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:09:27.131350  268041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:09:27.131415  268041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:09:27.131534  268041 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:09:27.131539  268041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:09:27.131563  268041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:09:27.131622  268041 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:09:27.131625  268041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:09:27.131649  268041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:09:27.131693  268041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-997173 san=[127.0.0.1 192.168.103.2 cert-expiration-997173 localhost minikube]
	I0916 11:09:27.322898  268041 provision.go:177] copyRemoteCerts
	I0916 11:09:27.322950  268041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:09:27.322981  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:27.342049  268041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/cert-expiration-997173/id_rsa Username:docker}
	I0916 11:09:27.438919  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:09:27.462218  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 11:09:27.484968  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:09:27.507823  268041 provision.go:87] duration metric: took 394.194174ms to configureAuth
	I0916 11:09:27.507847  268041 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:09:27.508031  268041 config.go:182] Loaded profile config "cert-expiration-997173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:09:27.508118  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:27.526988  268041 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:27.527197  268041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0916 11:09:27.527214  268041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:09:29.078270  223317 out.go:235]   - Booting up control plane ...
	I0916 11:09:29.078383  223317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:09:29.078468  223317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:09:29.078577  223317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:09:29.087145  223317 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:09:29.093307  223317 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:09:29.093409  223317 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:09:29.181178  223317 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:09:29.181370  223317 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:09:29.682720  223317 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.725085ms
	I0916 11:09:29.682818  223317 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:09:34.184810  223317 kubeadm.go:310] [api-check] The API server is healthy after 4.502027429s
	I0916 11:09:34.197376  223317 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:09:34.208509  223317 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:09:34.231742  223317 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:09:34.232019  223317 kubeadm.go:310] [mark-control-plane] Marking the node kubernetes-upgrade-749637 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:09:34.241821  223317 kubeadm.go:310] [bootstrap-token] Using token: 3axobz.nupygvx56lmmluwg
	I0916 11:09:32.912310  268041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:09:32.912326  268041 machine.go:96] duration metric: took 6.280502919s to provisionDockerMachine
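The provisioning step that just completed pushes a CRI-O environment drop-in to the node over SSH and restarts the runtime (the main.go:141 "About to run SSH command" block above, whose output appears a few lines later). A minimal Go sketch of that pattern, assuming golang.org/x/crypto/ssh; the key path and port are the ones this run's sshutil.go lines report and are specific to this CI host:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // key path and port come from the sshutil.go lines in this log
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3799/.minikube/machines/cert-expiration-997173/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway CI node
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33033", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // the same remote command the log shows: write the env drop-in, restart CRI-O
        cmd := `sudo mkdir -p /etc/sysconfig && printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
        out, err := sess.CombinedOutput(cmd)
        fmt.Println(string(out))
        if err != nil {
            panic(err)
        }
    }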
	I0916 11:09:32.912335  268041 start.go:293] postStartSetup for "cert-expiration-997173" (driver="docker")
	I0916 11:09:32.912345  268041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:09:32.912409  268041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:09:32.912449  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:32.930933  268041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/cert-expiration-997173/id_rsa Username:docker}
	I0916 11:09:33.026573  268041 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:09:33.030050  268041 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:09:33.030085  268041 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:09:33.030093  268041 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:09:33.030098  268041 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:09:33.030107  268041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:09:33.030160  268041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:09:33.030239  268041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:09:33.030326  268041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:09:33.038659  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:09:33.063135  268041 start.go:296] duration metric: took 150.786633ms for postStartSetup
	I0916 11:09:33.063216  268041 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:09:33.063248  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:33.080602  268041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/cert-expiration-997173/id_rsa Username:docker}
	I0916 11:09:33.174543  268041 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:09:33.179411  268041 fix.go:56] duration metric: took 6.569745742s for fixHost
	I0916 11:09:33.179427  268041 start.go:83] releasing machines lock for "cert-expiration-997173", held for 6.56978264s
	I0916 11:09:33.179487  268041 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-997173
	I0916 11:09:33.197462  268041 ssh_runner.go:195] Run: cat /version.json
	I0916 11:09:33.197497  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:33.197572  268041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:09:33.197626  268041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-997173
	I0916 11:09:33.216330  268041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/cert-expiration-997173/id_rsa Username:docker}
	I0916 11:09:33.216529  268041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/cert-expiration-997173/id_rsa Username:docker}
	I0916 11:09:33.309444  268041 ssh_runner.go:195] Run: systemctl --version
	I0916 11:09:33.383258  268041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:09:33.522760  268041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:09:33.527204  268041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:09:33.535929  268041 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:09:33.535999  268041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:09:33.544442  268041 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
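The two find/mv passes above neutralize any preinstalled loopback or bridge CNI configs by renaming them with a .mk_disabled suffix, so only minikube's chosen CNI (kindnet, per the cni.go lines in this log) stays active. A sketch of the loopback half in Go, using the same glob the log shows:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
        if err != nil {
            panic(err)
        }
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue // already disabled on a previous run
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                panic(err)
            }
            fmt.Println("disabled:", m)
        }
    }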
	I0916 11:09:33.544460  268041 start.go:495] detecting cgroup driver to use...
	I0916 11:09:33.544491  268041 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:09:33.544535  268041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:09:33.559059  268041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:09:33.572532  268041 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:09:33.572585  268041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:09:33.586848  268041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:09:33.599691  268041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:09:33.741676  268041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:09:33.875756  268041 docker.go:233] disabling docker service ...
	I0916 11:09:33.875805  268041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:09:33.890145  268041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:09:33.903950  268041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:09:34.035019  268041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:09:34.160213  268041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:09:34.171855  268041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:09:34.187925  268041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:09:34.187975  268041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:09:34.199320  268041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:09:34.199363  268041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:09:34.211566  268041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:09:34.223115  268041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:09:34.233889  268041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:09:34.244061  268041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:09:34.255814  268041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:09:34.266610  268041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
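Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with values along these lines. The key/value pairs come straight from the commands in the log; the TOML table headers are an assumption, since the log never prints the resulting file:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]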
	I0916 11:09:34.277422  268041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:09:34.286252  268041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:09:34.294413  268041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:34.398902  268041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:09:34.519143  268041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:09:34.519202  268041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:09:34.522733  268041 start.go:563] Will wait 60s for crictl version
	I0916 11:09:34.522772  268041 ssh_runner.go:195] Run: which crictl
	I0916 11:09:34.525753  268041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:09:34.559198  268041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:09:34.559260  268041 ssh_runner.go:195] Run: crio --version
	I0916 11:09:34.595130  268041 ssh_runner.go:195] Run: crio --version
	I0916 11:09:34.641928  268041 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:09:34.243555  223317 out.go:235]   - Configuring RBAC rules ...
	I0916 11:09:34.243698  223317 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:09:34.247742  223317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:09:34.255763  223317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:09:34.258895  223317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:09:34.261714  223317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:09:34.265475  223317 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:09:34.591596  223317 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:09:35.018125  223317 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:09:35.590792  223317 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:09:35.591883  223317 kubeadm.go:310] 
	I0916 11:09:35.591964  223317 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:09:35.591977  223317 kubeadm.go:310] 
	I0916 11:09:35.592109  223317 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:09:35.592131  223317 kubeadm.go:310] 
	I0916 11:09:35.592161  223317 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:09:35.592248  223317 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:09:35.592319  223317 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:09:35.592331  223317 kubeadm.go:310] 
	I0916 11:09:35.592404  223317 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:09:35.592417  223317 kubeadm.go:310] 
	I0916 11:09:35.592498  223317 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:09:35.592506  223317 kubeadm.go:310] 
	I0916 11:09:35.592593  223317 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:09:35.592673  223317 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:09:35.592738  223317 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:09:35.592744  223317 kubeadm.go:310] 
	I0916 11:09:35.592846  223317 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:09:35.592967  223317 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:09:35.592981  223317 kubeadm.go:310] 
	I0916 11:09:35.593105  223317 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3axobz.nupygvx56lmmluwg \
	I0916 11:09:35.593252  223317 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:09:35.593287  223317 kubeadm.go:310] 	--control-plane 
	I0916 11:09:35.593297  223317 kubeadm.go:310] 
	I0916 11:09:35.593520  223317 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:09:35.593537  223317 kubeadm.go:310] 
	I0916 11:09:35.593623  223317 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3axobz.nupygvx56lmmluwg \
	I0916 11:09:35.593749  223317 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:09:35.596065  223317 kubeadm.go:310] W0916 11:09:28.304959    7440 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:09:35.596465  223317 kubeadm.go:310] W0916 11:09:28.305593    7440 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:09:35.596729  223317 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:09:35.596893  223317 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:09:35.596919  223317 cni.go:84] Creating CNI manager for ""
	I0916 11:09:35.596930  223317 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:09:35.599151  223317 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:09:35.600722  223317 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:09:35.604309  223317 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:09:35.604325  223317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:09:35.622967  223317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:09:35.861726  223317 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:09:35.861826  223317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:35.861905  223317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-749637 minikube.k8s.io/updated_at=2024_09_16T11_09_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=kubernetes-upgrade-749637 minikube.k8s.io/primary=true
	I0916 11:09:36.022944  223317 ops.go:34] apiserver oom_adj: -16
	I0916 11:09:36.023010  223317 kubeadm.go:1113] duration metric: took 161.253313ms to wait for elevateKubeSystemPrivileges
	I0916 11:09:36.023029  223317 kubeadm.go:394] duration metric: took 4m12.469219474s to StartCluster
	I0916 11:09:36.023052  223317 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:36.023141  223317 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:09:36.025269  223317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:36.025565  223317 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:09:36.025715  223317 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:36.025826  223317 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-749637"
	I0916 11:09:36.025852  223317 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-749637"
	W0916 11:09:36.025861  223317 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:09:36.025866  223317 config.go:182] Loaded profile config "kubernetes-upgrade-749637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:09:36.025894  223317 host.go:66] Checking if "kubernetes-upgrade-749637" exists ...
	I0916 11:09:36.025920  223317 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-749637"
	I0916 11:09:36.025936  223317 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-749637"
	I0916 11:09:36.026234  223317 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-749637 --format={{.State.Status}}
	I0916 11:09:36.026421  223317 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-749637 --format={{.State.Status}}
	I0916 11:09:36.027361  223317 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:36.028578  223317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:36.050969  223317 kapi.go:59] client config for kubernetes-upgrade-749637: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kubernetes-upgrade-749637/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kubernetes-upgrade-749637/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
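The rest.Config dump above is the client minikube builds from the profile's client.crt/client.key plus the cluster CA. For comparison, a minimal client-go sketch (not minikube's code) that builds an equivalent client from the kubeconfig this run writes (path from the settings.go line above):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // kubeconfig path is specific to this CI run; adjust for other machines
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3799/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }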
	I0916 11:09:36.051313  223317 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-749637"
	W0916 11:09:36.051330  223317 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:09:36.051361  223317 host.go:66] Checking if "kubernetes-upgrade-749637" exists ...
	I0916 11:09:36.051805  223317 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-749637 --format={{.State.Status}}
	I0916 11:09:36.052927  223317 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:34.643274  268041 cli_runner.go:164] Run: docker network inspect cert-expiration-997173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:09:34.665014  268041 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:09:34.669085  268041 kubeadm.go:883] updating cluster {Name:cert-expiration-997173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-997173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:09:34.669177  268041 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:09:34.669217  268041 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:09:34.712600  268041 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:09:34.712613  268041 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:09:34.712672  268041 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:09:34.756360  268041 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:09:34.756372  268041 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:09:34.756379  268041 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 11:09:34.756478  268041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-997173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-997173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:09:34.756531  268041 ssh_runner.go:195] Run: crio config
	I0916 11:09:34.800587  268041 cni.go:84] Creating CNI manager for ""
	I0916 11:09:34.800600  268041 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:09:34.800610  268041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:09:34.800634  268041 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-997173 NodeName:cert-expiration-997173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:09:34.800794  268041 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-997173"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
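The three-document kubeadm config above is generated in-process (note the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" a few lines below), presumably rendered from a Go template. A minimal text/template sketch of that kind of rendering; the struct and template fragment here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    type kubeadmParams struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
    }

    // string concatenation keeps the YAML flush-left regardless of source indentation
    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.BindPort}}\n" +
        "nodeRegistration:\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, kubeadmParams{
            AdvertiseAddress: "192.168.103.2",
            BindPort:         8443,
            NodeName:         "cert-expiration-997173",
        }); err != nil {
            panic(err)
        }
    }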
	I0916 11:09:34.800873  268041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:09:34.811162  268041 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:09:34.811224  268041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:09:34.820643  268041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0916 11:09:34.839552  268041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:09:34.859868  268041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0916 11:09:34.881316  268041 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:09:34.884737  268041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:35.005313  268041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:35.020655  268041 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173 for IP: 192.168.103.2
	I0916 11:09:35.020672  268041 certs.go:194] generating shared ca certs ...
	I0916 11:09:35.020692  268041 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:35.020842  268041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:09:35.020882  268041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:09:35.020888  268041 certs.go:256] generating profile certs ...
	W0916 11:09:35.021033  268041 out.go:270] ! Certificate client.crt has expired. Generating a new one...
	I0916 11:09:35.021050  268041 certs.go:624] cert expired /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/client.crt: expiration: 2024-09-16 11:09:12 +0000 UTC, now: 2024-09-16 11:09:35.02104618 +0000 UTC m=+8.714628114
	I0916 11:09:35.021233  268041 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/client.key
	I0916 11:09:35.021283  268041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/client.crt with IP's: []
	I0916 11:09:35.135052  268041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/client.crt ...
	I0916 11:09:35.135066  268041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/client.crt: {Name:mkf64ae0d6a90d313fe3ce36a50d9ce50e3144c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:35.135214  268041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/client.key ...
	I0916 11:09:35.135222  268041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/client.key: {Name:mk980482409e7cd04056664db82b14d6fd0933d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:09:35.135389  268041 out.go:270] ! Certificate apiserver.crt.fb14dcf8 has expired. Generating a new one...
	I0916 11:09:35.135404  268041 certs.go:624] cert expired /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.crt.fb14dcf8: expiration: 2024-09-16 11:09:12 +0000 UTC, now: 2024-09-16 11:09:35.135400128 +0000 UTC m=+8.828982061
	I0916 11:09:35.135468  268041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.key.fb14dcf8
	I0916 11:09:35.135478  268041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.crt.fb14dcf8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:09:35.267961  268041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.crt.fb14dcf8 ...
	I0916 11:09:35.267976  268041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.crt.fb14dcf8: {Name:mkab4f629c1814f375ba6c80e80082c53120be59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:35.268109  268041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.key.fb14dcf8 ...
	I0916 11:09:35.268116  268041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.key.fb14dcf8: {Name:mk07981baff1c792538b3ab76d5b56d586bd53ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:35.268172  268041 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.crt.fb14dcf8 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.crt
	I0916 11:09:35.268300  268041 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.key.fb14dcf8 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.key
	W0916 11:09:35.268491  268041 out.go:270] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0916 11:09:35.268508  268041 certs.go:624] cert expired /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.crt: expiration: 2024-09-16 11:09:12 +0000 UTC, now: 2024-09-16 11:09:35.268502618 +0000 UTC m=+8.962084553
	I0916 11:09:35.268567  268041 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.key
	I0916 11:09:35.268585  268041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.crt with IP's: []
	I0916 11:09:35.493796  268041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.crt ...
	I0916 11:09:35.493812  268041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.crt: {Name:mk5ea6a52be88b014fc06537d39a157039e56c52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:35.493973  268041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.key ...
	I0916 11:09:35.493980  268041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.key: {Name:mk8f31f3d5323d4c6e4d4762331b62d2f99f50c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
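Each of the three regenerations above (client, apiserver, proxy-client) follows the same path: generate a key pair, build a certificate template carrying the requested SANs, and sign it with the profile CA. A condensed crypto/x509 sketch of that flow, using the apiserver cert's IP SANs from this log; the key size, validity window, serial numbers, and the self-generated stand-in CA are all assumptions made to keep the example self-contained (the real flow loads ca.crt/ca.key from disk):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // stand-in CA; minikube would load the existing minikubeCA instead
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // leaf cert with the IP SANs crypto.go reports for apiserver.crt.fb14dcf8
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }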
	I0916 11:09:35.494133  268041 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:09:35.494166  268041 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:09:35.494173  268041 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:09:35.494191  268041 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:09:35.494224  268041 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:09:35.494241  268041 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:09:35.494272  268041 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:09:35.494891  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:09:35.519689  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:09:35.544510  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:09:35.568259  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:09:35.591172  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:09:35.616961  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:09:35.640167  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:09:35.666376  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/cert-expiration-997173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:09:35.704416  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:09:35.735880  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:09:35.822664  268041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:09:36.014942  268041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:09:36.125563  268041 ssh_runner.go:195] Run: openssl version
	I0916 11:09:36.201718  268041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:09:36.232288  268041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:09:36.238981  268041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:09:36.239037  268041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:09:36.304698  268041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:09:36.322473  268041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:09:36.054526  223317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:36.054546  223317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:36.054606  223317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-749637
	I0916 11:09:36.077826  223317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/kubernetes-upgrade-749637/id_rsa Username:docker}
	I0916 11:09:36.083497  223317 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:36.083521  223317 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:36.083585  223317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-749637
	I0916 11:09:36.117377  223317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/kubernetes-upgrade-749637/id_rsa Username:docker}
	I0916 11:09:36.146100  223317 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:36.159086  223317 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:09:36.159155  223317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:09:36.170228  223317 api_server.go:72] duration metric: took 144.616567ms to wait for apiserver process to appear ...
	I0916 11:09:36.170252  223317 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:09:36.170272  223317 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:09:36.175382  223317 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:09:36.182733  223317 api_server.go:141] control plane version: v1.31.1
	I0916 11:09:36.182772  223317 api_server.go:131] duration metric: took 12.512825ms to wait for apiserver health ...
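The api_server.go lines above implement a plain HTTPS poll of the apiserver's /healthz endpoint until it returns 200. A sketch of the same loop; InsecureSkipVerify is a shortcut for brevity, whereas the real check trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned 200: %s\n", body)
                    return
                }
            }
            time.Sleep(time.Second)
        }
    }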
	I0916 11:09:36.182782  223317 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:09:36.182895  223317 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 11:09:36.182916  223317 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 11:09:36.190397  223317 system_pods.go:59] 4 kube-system pods found
	I0916 11:09:36.190431  223317 system_pods.go:61] "etcd-kubernetes-upgrade-749637" [356b34ed-85eb-4768-b662-1ba2d8f1e4f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 11:09:36.190441  223317 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-749637" [6142d2df-aad6-4539-9c40-7021da5f7898] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 11:09:36.190449  223317 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-749637" [bcac72c3-9620-46bc-a80a-54912e387620] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 11:09:36.190455  223317 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-749637" [b2656455-931b-4d57-924a-99d994704812] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 11:09:36.190462  223317 system_pods.go:74] duration metric: took 7.673135ms to wait for pod list to return data ...
	I0916 11:09:36.190471  223317 kubeadm.go:582] duration metric: took 164.86528ms to wait for: map[apiserver:true system_pods:true]
	I0916 11:09:36.190481  223317 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:09:36.192341  223317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:36.195593  223317 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:09:36.195629  223317 node_conditions.go:123] node cpu capacity is 8
	I0916 11:09:36.195643  223317 node_conditions.go:105] duration metric: took 5.15791ms to run NodePressure ...
	I0916 11:09:36.195658  223317 start.go:241] waiting for startup goroutines ...
	I0916 11:09:36.256668  223317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:36.604862  223317 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:09:36.606773  223317 addons.go:510] duration metric: took 581.065522ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:09:36.606831  223317 start.go:246] waiting for cluster config update ...
	I0916 11:09:36.606847  223317 start.go:255] writing updated cluster config ...
	I0916 11:09:36.607150  223317 ssh_runner.go:195] Run: rm -f paused
	I0916 11:09:36.614922  223317 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-749637" cluster and "default" namespace by default
	E0916 11:09:36.616314  223317 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	I0916 11:09:36.398075  268041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:09:36.406001  268041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:09:36.406073  268041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:09:36.416049  268041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:09:36.498096  268041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:09:36.513077  268041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:36.518377  268041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:36.518427  268041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:36.526076  268041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
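The test -L / ln -fs pairs above follow OpenSSL's c_rehash convention: each CA is exposed in /etc/ssl/certs under its subject-hash filename with a .0 suffix (b5213941.0 for minikubeCA in this run), which is how TLS libraries locate trust anchors. A small Go sketch of the same two steps:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
            panic(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }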
	I0916 11:09:36.603415  268041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:09:36.608251  268041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:09:36.621781  268041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:09:36.629953  268041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:09:36.698891  268041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:09:36.710466  268041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:09:36.720411  268041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
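Each openssl x509 -checkend 86400 probe above asks whether a certificate expires within the next 24 hours (86,400 seconds); a non-zero exit triggers regeneration. An equivalent check in Go, offered as a sketch rather than minikube's implementation:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Until(cert.NotAfter) < d, nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }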
	I0916 11:09:36.731637  268041 kubeadm.go:392] StartCluster: {Name:cert-expiration-997173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-997173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:36.731731  268041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:09:36.731808  268041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:09:36.839798  268041 cri.go:89] found id: "d8975288b029d778c48fc0db3538a03eba60bd95032a0ef13edcc1b910df53ff"
	I0916 11:09:36.839812  268041 cri.go:89] found id: "936f15644c4839f05a4b65664a285895b2b17c2383a569bd4b61d3e587ea3ffb"
	I0916 11:09:36.839816  268041 cri.go:89] found id: "c999c8ecdad5e9176d92a1de3513a3d178dafcba46096ac24478d1fd2fa7411f"
	I0916 11:09:36.839819  268041 cri.go:89] found id: "28c73b5d6532f8d45c2f07d6363785221db591d428f5b6b58605ab5c6dd56e64"
	I0916 11:09:36.839822  268041 cri.go:89] found id: "525f9506ca945c440f3a5a8633526e441bb62faddf984060457bc27d922958c9"
	I0916 11:09:36.839825  268041 cri.go:89] found id: "15b134810b77ab2a4d6a131a4c2bc2a4223f35ac502354fb60da4b493bc0328e"
	I0916 11:09:36.839829  268041 cri.go:89] found id: "04a2ce1c135b3cadc447d08fe13b01f572cf6bc4e99d56b8164554b3584d245b"
	I0916 11:09:36.839831  268041 cri.go:89] found id: "b2cc93bb5ccc5229eda19245a14edd8372ee5d43056b9bef4ec99fce6e6a6d3e"
	I0916 11:09:36.839834  268041 cri.go:89] found id: "e5cbe79ac64f8ab8ac1a5379d59f2ccb90ae142d73dd3e0e517cfaa973a9e678"
	I0916 11:09:36.839842  268041 cri.go:89] found id: "bc94f1b0be43901fcec2ae437a70c2d09c3df0dfdcd80d972683e6df5dc20c97"
	I0916 11:09:36.839844  268041 cri.go:89] found id: "fd61827c1bb0c44e91503fa5609ebaec70ec820e5181cb915d8f629f17587823"
	I0916 11:09:36.839847  268041 cri.go:89] found id: "0d51197c27265c7561b8c0714503ee88f5842b3e8e1cbcc52853987feeb84c3e"
	I0916 11:09:36.839850  268041 cri.go:89] found id: "81220dc49cd84d4570213a65df3f108cb87eb2693bb5eea2e9446a3cb19040d6"
	I0916 11:09:36.839853  268041 cri.go:89] found id: "6c84610c54784f71c1242c0b673e61cc60bf9090cfff27c3250f5e31953c7eb7"
	I0916 11:09:36.839860  268041 cri.go:89] found id: "4821234946ec50a7f35bb9336beb039e237b1aabfc8247ef25ea8393d30783e1"
	I0916 11:09:36.839868  268041 cri.go:89] found id: ""
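The found-id list above comes from shelling out to crictl with a pod-namespace label filter (the cri.go:54 listing two lines earlier). The same call from Go, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // --quiet prints one container ID per line, matching the found-id output above
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }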
	I0916 11:09:36.839915  268041 ssh_runner.go:195] Run: sudo runc list -f json
	I0916 11:09:36.941217  268041 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"04a2ce1c135b3cadc447d08fe13b01f572cf6bc4e99d56b8164554b3584d245b","pid":2884,"status":"running","bundle":"/run/containers/storage/overlay-containers/04a2ce1c135b3cadc447d08fe13b01f572cf6bc4e99d56b8164554b3584d245b/userdata","rootfs":"/var/lib/containers/storage/overlay/bf139c53fd74c05be131b728cb0c761e192753b6437387551929314f1a5a5a0a/merged","created":"2024-09-16T11:09:35.822334699Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e80daca3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e80daca3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"04a2ce1c135b3cadc447d08fe13b01f572cf6bc4e99d56b8164554b3584d245b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:09:35.732783612Z","io.kubernetes.cri-o.Image":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri-o.ImageRef":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-p4292\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c48a8fc-3bbc-4ec6-ad42-f71607ed50df\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-p4292_7c48a8fc-3bbc-4ec6-ad42-f71607ed50df/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bf139c53fd74c05be131b728cb0c761e192753b6437387551929314f1a5a5a0a/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-p4292_kube-system_7c48a8fc-3bbc-4ec6-ad42-f71607ed50df_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a3f65f9507815b1d78f91c6af7ea8b28fa8849288d7395460d109d8cb4d23901/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a3f65f9507815b1d78f91c6af7ea8b28fa8849288d7395460d109d8cb4d23901","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-p4292_kube-system_7c48a8fc-3bbc-4ec6-ad42-f71607ed50df_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7c48a8fc-3bbc-4ec6-ad42-f71607ed50df/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c48a8fc-3bbc-4ec6-ad42-f71607ed50df/containers/kindnet-cni/bb62e2d4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c48a8fc-3bbc-4ec6-ad42-f71607ed50df/volumes/kubernetes.io~projected/kube-api-access-fv4h4\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-p4292","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c48a8fc-3bbc-4ec6-ad42-f71607ed50df","kubernetes.io/config.seen":"2024-09-16T11:06:28.135148977Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0d51197c27265c7561b8c0714503ee88f5842b3e8e1cbcc52853987feeb84c3e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0d51197c27265c7561b8c0714503ee88f5842b3e8e1cbcc52853987feeb84c3e/userdata","rootfs":"/var/lib/containers/storage/overlay/855a14cef39cde1a14f702905467459a84f74a889ff3a3825ff9100f699fa30e/merged","created":"2024-09-16T11:06:17.812498385Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0d51197c27265c7561b8c0714503ee88f5842b3e8e1cbcc52853987feeb84c3e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:06:17.745129917Z","io.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-cert-expiration-997173\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"87d13c11a28a94471905c76161470236\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-cert-expiration-997173_87d13c11a28a94471905c76161470236/
kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/855a14cef39cde1a14f702905467459a84f74a889ff3a3825ff9100f699fa30e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-cert-expiration-997173_kube-system_87d13c11a28a94471905c76161470236_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/752b4d3ee7ad2bc67d621eb65f2b18b833156c6374cdd963f1bb6a647b3cdc79/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"752b4d3ee7ad2bc67d621eb65f2b18b833156c6374cdd963f1bb6a647b3cdc79","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-cert-expiration-997173_kube-system_87d13c11a28a94471905c76161470236_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/87d13c
11a28a94471905c76161470236/containers/kube-apiserver/f4cc6348\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/87d13c11a28a94471905c76161470236/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\"
:true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-cert-expiration-997173","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"87d13c11a28a94471905c76161470236","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.103.2:8443","kubernetes.io/config.hash":"87d13c11a28a94471905c76161470236","kubernetes.io/config.seen":"2024-09-16T11:06:17.231004825Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"15b134810b77ab2a4d6a131a4c2bc2a4223f35ac502354fb60da4b493bc0328e","pid":2900,"status":"running","bundle":"/run/containers/storage/overlay-containers/15b134810b77ab2a4d6a131a4c2bc2a4223f35ac502354fb60da4b493bc0328e/userdata","rootfs":"/var/lib/containers/storage/overlay/df8d619c50464081b5e11af02f78866c9d84489ff162e44b408e9a8f827914cd/merged","created":"2024-09-16T11:09:35.902877354Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.h
ash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15b134810b77ab2a4d6a131a4c2bc2a4223f35ac502354fb60da4b493bc0328e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:09:35.735902146Z","io.kubernetes.cri-o.Image":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a895
61","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-f6mxh\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1ee793da-2cff-43cd-92d7-56628deec6f7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-f6mxh_1ee793da-2cff-43cd-92d7-56628deec6f7/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/df8d619c50464081b5e11af02f78866c9d84489ff162e44b408e9a8f827914cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-f6mxh_kube-system_1ee793da-2cff-43cd-92d7-56628deec6f7_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2f5bae83408ad24187f4eddb82cd3da68ef5d78dd8ae4ba0fc42d2cbace4acf5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2f5bae83408ad24187f4eddb82cd3da68ef5d78dd8ae4ba0fc42d2cbace4acf5","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-f6mxh_
kube-system_1ee793da-2cff-43cd-92d7-56628deec6f7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1ee793da-2cff-43cd-92d7-56628deec6f7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1ee793da-2cff-43cd-92d7-56628deec6f7/containers/kube-proxy/6070d7fd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/1ee793da-2cff-43cd-92d7-5662
8deec6f7/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1ee793da-2cff-43cd-92d7-56628deec6f7/volumes/kubernetes.io~projected/kube-api-access-wctqk\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-f6mxh","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1ee793da-2cff-43cd-92d7-56628deec6f7","kubernetes.io/config.seen":"2024-09-16T11:06:28.136066196Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28c73b5d6532f8d45c2f07d6363785221db591d428f5b6b58605ab5c6dd56e64","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/28c73b5d6532f8d45c2f07d6363785221db591d428f5b6b58605ab5c6dd56e64/userdata","rootfs":"/var/lib/containers/storage/overlay/ab74b94e577802a83ca607f532a73
57774b7765d3a8ca6fe6e32f36bcdbcd0a3/merged","created":"2024-09-16T11:09:36.019360756Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"28c73b5d6532f8d45c2f07d6363785221db591d428f5b6b58605ab5c6dd56e64","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:09:35.801793265Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"719598fc-15f8-41d6-a4ca-270ad65be19d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_719598fc-15f8-41d6-a4ca-270ad65be19d/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ab74b94e577802a83ca607f532a7357774b7765d3a8ca6fe6e32f36bcdbcd0a3/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_719598fc-15f8-41d6-a4ca-270ad65be19d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ec1f7820fdc08c10a2948
25303daa0545a196274217a0d2f8bb76e8c4e689169/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ec1f7820fdc08c10a294825303daa0545a196274217a0d2f8bb76e8c4e689169","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_719598fc-15f8-41d6-a4ca-270ad65be19d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/719598fc-15f8-41d6-a4ca-270ad65be19d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/719598fc-15f8-41d6-a4ca-270ad65be19d/containers/storage-provisioner/0b326f4c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/sec
rets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/719598fc-15f8-41d6-a4ca-270ad65be19d/volumes/kubernetes.io~projected/kube-api-access-28d6w\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"719598fc-15f8-41d6-a4ca-270ad65be19d","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccount
Name\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-09-16T11:07:10.534806601Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4821234946ec50a7f35bb9336beb039e237b1aabfc8247ef25ea8393d30783e1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4821234946ec50a7f35bb9336beb039e237b1aabfc8247ef25ea8393d30783e1/userdata","rootfs":"/var/lib/containers/storage/overlay/df49a4b737a461c883a74319490f891e22186b3c7a10ed0bf77236ddb6251b69/merged","created":"2024-09-16T11:06:17.809864763Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"
cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4821234946ec50a7f35bb9336beb039e237b1aabfc8247ef25ea8393d30783e1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:06:17.743025442Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-cert-expiration-997173\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"963e0991d0bafadafaba83e1b06e8626\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etc
d-cert-expiration-997173_963e0991d0bafadafaba83e1b06e8626/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/df49a4b737a461c883a74319490f891e22186b3c7a10ed0bf77236ddb6251b69/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-cert-expiration-997173_kube-system_963e0991d0bafadafaba83e1b06e8626_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/88e0bc927dbc84923682a1a5c789b752c550f98460c04863e5e279f40fc27461/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"88e0bc927dbc84923682a1a5c789b752c550f98460c04863e5e279f40fc27461","io.kubernetes.cri-o.SandboxName":"k8s_etcd-cert-expiration-997173_kube-system_963e0991d0bafadafaba83e1b06e8626_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/963e0991
d0bafadafaba83e1b06e8626/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/963e0991d0bafadafaba83e1b06e8626/containers/etcd/877b2628\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-cert-expiration-997173","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"963e0991d0bafadafaba83e1b06e8626","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.103.2:2379","kubernetes.io/config.hash":"963e0991d0bafadafaba83e1b06e8626","kubernetes.io/config.seen":"2024-09-16T11:06:17.
231000653Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"525f9506ca945c440f3a5a8633526e441bb62faddf984060457bc27d922958c9","pid":2909,"status":"running","bundle":"/run/containers/storage/overlay-containers/525f9506ca945c440f3a5a8633526e441bb62faddf984060457bc27d922958c9/userdata","rootfs":"/var/lib/containers/storage/overlay/6341347f03f3a6f550ce2ff016fa8590b3efeacf6b85f98a0926bcd3a6237c4d/merged","created":"2024-09-16T11:09:35.835423886Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kuber
netes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"525f9506ca945c440f3a5a8633526e441bb62faddf984060457bc27d922958c9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:09:35.742079944Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3
","io.kubernetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-7f6pn\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"428a8b5a-cbc3-4337-a768-abfa8e11fbc7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-7f6pn_428a8b5a-cbc3-4337-a768-abfa8e11fbc7/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6341347f03f3a6f550ce2ff016fa8590b3efeacf6b85f98a0926bcd3a6237c4d/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-7f6pn_kube-system_428a8b5a-cbc3-4337-a768-abfa8e11fbc7_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a7fae572050639cd14eb4843e998d7248594b15698ba646b8e106482d55729a2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a7fae572050639
cd14eb4843e998d7248594b15698ba646b8e106482d55729a2","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-7f6pn_kube-system_428a8b5a-cbc3-4337-a768-abfa8e11fbc7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/428a8b5a-cbc3-4337-a768-abfa8e11fbc7/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/428a8b5a-cbc3-4337-a768-abfa8e11fbc7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/428a8b5a-cbc3-4337-a768-abfa8e11fbc7/containers/coredns/d0cbaa0f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernet
es.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/428a8b5a-cbc3-4337-a768-abfa8e11fbc7/volumes/kubernetes.io~projected/kube-api-access-wzssl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-7f6pn","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"428a8b5a-cbc3-4337-a768-abfa8e11fbc7","kubernetes.io/config.seen":"2024-09-16T11:07:10.534621396Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c84610c54784f71c1242c0b673e61cc60bf9090cfff27c3250f5e31953c7eb7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6c84610c54784f71c1242c0b673e61cc60bf9090cfff27c3250f5e31953c7eb7/userdata","rootfs":"/var/lib/containers/storage/overlay/3146b0ea422f43904c16cf197f1e817b6bcdc71a4564133ee86222400f2df045/merged","created":"2024-09-16T11:06:17.807918701Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.containe
r.hash":"d1900d79","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6c84610c54784f71c1242c0b673e61cc60bf9090cfff27c3250f5e31953c7eb7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:06:17.743872767Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db94
1b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-cert-expiration-997173\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"69c69e3faf8d4d357c9fd944bee942e9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-cert-expiration-997173_69c69e3faf8d4d357c9fd944bee942e9/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3146b0ea422f43904c16cf197f1e817b6bcdc71a4564133ee86222400f2df045/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-997173_kube-system_69c69e3faf8d4d357c9fd944bee942e9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bf040342600bd2843d356f4336baa164c5fa01d8a9a66767e53ced1ac31885ae/userdata/resolv.conf","io.kubern
etes.cri-o.SandboxID":"bf040342600bd2843d356f4336baa164c5fa01d8a9a66767e53ced1ac31885ae","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-cert-expiration-997173_kube-system_69c69e3faf8d4d357c9fd944bee942e9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/69c69e3faf8d4d357c9fd944bee942e9/containers/kube-controller-manager/6c9d846e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/69c69e3faf8d4d357c9fd944bee942e9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/
ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-cert-expiration-997
173","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"69c69e3faf8d4d357c9fd944bee942e9","kubernetes.io/config.hash":"69c69e3faf8d4d357c9fd944bee942e9","kubernetes.io/config.seen":"2024-09-16T11:06:17.231007365Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81220dc49cd84d4570213a65df3f108cb87eb2693bb5eea2e9446a3cb19040d6","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/81220dc49cd84d4570213a65df3f108cb87eb2693bb5eea2e9446a3cb19040d6/userdata","rootfs":"/var/lib/containers/storage/overlay/497b5f7eb4655c3e775686dd76549dfd775b1a3503f4ec48002098644cd710d8/merged","created":"2024-09-16T11:06:17.805752377Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container
.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"81220dc49cd84d4570213a65df3f108cb87eb2693bb5eea2e9446a3cb19040d6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:06:17.744478252Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-cert-expiration-997173\",\"io.kubernetes.pod.namespace\":\"kube-
system\",\"io.kubernetes.pod.uid\":\"72228e30963e88f940ba48185bef9911\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-cert-expiration-997173_72228e30963e88f940ba48185bef9911/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/497b5f7eb4655c3e775686dd76549dfd775b1a3503f4ec48002098644cd710d8/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-cert-expiration-997173_kube-system_72228e30963e88f940ba48185bef9911_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f5ecb96f973afcac3e71d2753898d1ef47db4a27df5431973bf213bbd4baa1ad/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f5ecb96f973afcac3e71d2753898d1ef47db4a27df5431973bf213bbd4baa1ad","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-cert-expiration-997173_kube-system_72228e30963e88f940ba48185bef9911_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"
false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72228e30963e88f940ba48185bef9911/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72228e30963e88f940ba48185bef9911/containers/kube-scheduler/65220eed\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-cert-expiration-997173","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72228e30963e88f940ba48185bef9911","kubernetes.io/config.hash":"72228e30963e88f940ba48185bef9911","kubernetes.io/config.seen":"2024-09-16T11:06:17.231008957Z","ku
bernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"936f15644c4839f05a4b65664a285895b2b17c2383a569bd4b61d3e587ea3ffb","pid":3048,"status":"running","bundle":"/run/containers/storage/overlay-containers/936f15644c4839f05a4b65664a285895b2b17c2383a569bd4b61d3e587ea3ffb/userdata","rootfs":"/var/lib/containers/storage/overlay/0a28cfe3fa2dcd0f9265474171327ef50d5886986f20235f3e217532458e17dd/merged","created":"2024-09-16T11:09:36.104198921Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMess
agePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"936f15644c4839f05a4b65664a285895b2b17c2383a569bd4b61d3e587ea3ffb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:09:35.910710363Z","io.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-cert-expiration-997173\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"87d13c11a28a94471905c76161470236\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-cert-expiration-997173_87d13c11a28a94471905c76161470236/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserv
er\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0a28cfe3fa2dcd0f9265474171327ef50d5886986f20235f3e217532458e17dd/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-cert-expiration-997173_kube-system_87d13c11a28a94471905c76161470236_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/752b4d3ee7ad2bc67d621eb65f2b18b833156c6374cdd963f1bb6a647b3cdc79/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"752b4d3ee7ad2bc67d621eb65f2b18b833156c6374cdd963f1bb6a647b3cdc79","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-cert-expiration-997173_kube-system_87d13c11a28a94471905c76161470236_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/87d13c11a28a94471905c76161470236/containers/kube-apiserver/a3827396\",
\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/87d13c11a28a94471905c76161470236/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kuberne
tes.pod.name":"kube-apiserver-cert-expiration-997173","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"87d13c11a28a94471905c76161470236","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.103.2:8443","kubernetes.io/config.hash":"87d13c11a28a94471905c76161470236","kubernetes.io/config.seen":"2024-09-16T11:06:17.231004825Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2cc93bb5ccc5229eda19245a14edd8372ee5d43056b9bef4ec99fce6e6a6d3e","pid":2844,"status":"running","bundle":"/run/containers/storage/overlay-containers/b2cc93bb5ccc5229eda19245a14edd8372ee5d43056b9bef4ec99fce6e6a6d3e/userdata","rootfs":"/var/lib/containers/storage/overlay/1641d1d116f1308488eab1178b46bc6c76973b95d52d1673fb0b38b52c13889a/merged","created":"2024-09-16T11:09:35.725278233Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler",
"io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b2cc93bb5ccc5229eda19245a14edd8372ee5d43056b9bef4ec99fce6e6a6d3e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:09:35.682370151Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.conta
iner.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-cert-expiration-997173\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72228e30963e88f940ba48185bef9911\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-cert-expiration-997173_72228e30963e88f940ba48185bef9911/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1641d1d116f1308488eab1178b46bc6c76973b95d52d1673fb0b38b52c13889a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-cert-expiration-997173_kube-system_72228e30963e88f940ba48185bef9911_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f5ecb96f973afcac3e71d2753898d1ef47db4a27df5431973bf213bbd4baa1ad/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f5ecb96f973afcac3e71d2753898d1ef47db4a27df5431973bf213bbd4baa1ad","io.kubernetes.cri-o.SandboxName":"k8s_kube-s
cheduler-cert-expiration-997173_kube-system_72228e30963e88f940ba48185bef9911_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72228e30963e88f940ba48185bef9911/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72228e30963e88f940ba48185bef9911/containers/kube-scheduler/ae60a9dd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-cert-expiration-997173","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72228e30
963e88f940ba48185bef9911","kubernetes.io/config.hash":"72228e30963e88f940ba48185bef9911","kubernetes.io/config.seen":"2024-09-16T11:06:17.231008957Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bc94f1b0be43901fcec2ae437a70c2d09c3df0dfdcd80d972683e6df5dc20c97","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/bc94f1b0be43901fcec2ae437a70c2d09c3df0dfdcd80d972683e6df5dc20c97/userdata","rootfs":"/var/lib/containers/storage/overlay/7ae1bacfd81ae5e86c055b43629ab968eafb966ce8e411da8c681f73ba35a580/merged","created":"2024-09-16T11:06:29.757137326Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e80daca3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e80daca3\",\"io.kubernetes.c
ontainer.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"bc94f1b0be43901fcec2ae437a70c2d09c3df0dfdcd80d972683e6df5dc20c97","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:06:29.718507872Z","io.kubernetes.cri-o.Image":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri-o.ImageRef":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-p4292\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c48a8fc-3bbc-4ec6-ad42-f71607ed50df\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-p4292_7c48
a8fc-3bbc-4ec6-ad42-f71607ed50df/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7ae1bacfd81ae5e86c055b43629ab968eafb966ce8e411da8c681f73ba35a580/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-p4292_kube-system_7c48a8fc-3bbc-4ec6-ad42-f71607ed50df_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a3f65f9507815b1d78f91c6af7ea8b28fa8849288d7395460d109d8cb4d23901/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a3f65f9507815b1d78f91c6af7ea8b28fa8849288d7395460d109d8cb4d23901","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-p4292_kube-system_7c48a8fc-3bbc-4ec6-ad42-f71607ed50df_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propag
ation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7c48a8fc-3bbc-4ec6-ad42-f71607ed50df/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7c48a8fc-3bbc-4ec6-ad42-f71607ed50df/containers/kindnet-cni/464d782a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c48a8fc-3bbc-4ec6-ad42-f71607ed50df/volumes/kubernetes.io~projected/kube-api-access-fv4h4\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-p4292","io.kuber
netes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c48a8fc-3bbc-4ec6-ad42-f71607ed50df","kubernetes.io/config.seen":"2024-09-16T11:06:28.135148977Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c999c8ecdad5e9176d92a1de3513a3d178dafcba46096ac24478d1fd2fa7411f","pid":2978,"status":"running","bundle":"/run/containers/storage/overlay-containers/c999c8ecdad5e9176d92a1de3513a3d178dafcba46096ac24478d1fd2fa7411f/userdata","rootfs":"/var/lib/containers/storage/overlay/e931698fa65504ad07c48cfb80a6baf66211ff62083bf68fd2a824c1a22ef01a/merged","created":"2024-09-16T11:09:36.009999709Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotat
ions":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c999c8ecdad5e9176d92a1de3513a3d178dafcba46096ac24478d1fd2fa7411f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:09:35.826567755Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-cert-expiration-997173\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":
\"69c69e3faf8d4d357c9fd944bee942e9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-cert-expiration-997173_69c69e3faf8d4d357c9fd944bee942e9/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e931698fa65504ad07c48cfb80a6baf66211ff62083bf68fd2a824c1a22ef01a/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-997173_kube-system_69c69e3faf8d4d357c9fd944bee942e9_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bf040342600bd2843d356f4336baa164c5fa01d8a9a66767e53ced1ac31885ae/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bf040342600bd2843d356f4336baa164c5fa01d8a9a66767e53ced1ac31885ae","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-cert-expiration-997173_kube-system_69c69e3faf8d4d357c9fd944bee942e9_0","io.kubernetes.cri-o.SeccompProfilePath"
:"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/69c69e3faf8d4d357c9fd944bee942e9/containers/kube-controller-manager/ef98fb8b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/69c69e3faf8d4d357c9fd944bee942e9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"
container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-cert-expiration-997173","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"69c69e3faf8d4d357c9fd944bee942e9","kubernetes.io/config.hash":"69c69e3faf8d4d357c9fd944bee942e9","kubernetes.io/config.seen":"2024-09-16T11:0
6:17.231007365Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d8975288b029d778c48fc0db3538a03eba60bd95032a0ef13edcc1b910df53ff","pid":3067,"status":"running","bundle":"/run/containers/storage/overlay-containers/d8975288b029d778c48fc0db3538a03eba60bd95032a0ef13edcc1b910df53ff/userdata","rootfs":"/var/lib/containers/storage/overlay/db8a7da14670fe9d1c56e6ffef1548c75e709f6063f09197d94d85a1d79aed93/merged","created":"2024-09-16T11:09:36.108687419Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termi
nationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d8975288b029d778c48fc0db3538a03eba60bd95032a0ef13edcc1b910df53ff","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:09:35.923486289Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-cert-expiration-997173\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"963e0991d0bafadafaba83e1b06e8626\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-cert-expiration-997173_963e0991d0bafadafaba83e1b06e8626/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPo
int":"/var/lib/containers/storage/overlay/db8a7da14670fe9d1c56e6ffef1548c75e709f6063f09197d94d85a1d79aed93/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-cert-expiration-997173_kube-system_963e0991d0bafadafaba83e1b06e8626_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/88e0bc927dbc84923682a1a5c789b752c550f98460c04863e5e279f40fc27461/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"88e0bc927dbc84923682a1a5c789b752c550f98460c04863e5e279f40fc27461","io.kubernetes.cri-o.SandboxName":"k8s_etcd-cert-expiration-997173_kube-system_963e0991d0bafadafaba83e1b06e8626_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/963e0991d0bafadafaba83e1b06e8626/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"hos
t_path\":\"/var/lib/kubelet/pods/963e0991d0bafadafaba83e1b06e8626/containers/etcd/7b3694bb\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-cert-expiration-997173","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"963e0991d0bafadafaba83e1b06e8626","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.103.2:2379","kubernetes.io/config.hash":"963e0991d0bafadafaba83e1b06e8626","kubernetes.io/config.seen":"2024-09-16T11:06:17.231000653Z","kubernetes.io/config.source":"file"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"e5cbe79ac64f8ab8ac1a5379d59f2ccb90ae142d73dd3e0e517cfaa973a9e678","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e5cbe79ac64f8ab8ac1a5379d59f2ccb90ae142d73dd3e0e517cfaa973a9e678/userdata","rootfs":"/var/lib/containers/storage/overlay/b135b0a0a3f4cc747e0f7ce38f0cb462e85e6b373dbbb091f8f2a8585dadd4df/merged","created":"2024-09-16T11:07:11.226872079Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e5cbe79ac64f8ab8ac1a5379d59f2ccb90ae142d73dd3e0e517cfaa973a9e678","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:07:11.194146116Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-7f6pn\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"428a8b5a-cbc3-4337-a768-abfa8e11fbc7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-7f6pn_428a8b5a-cbc3-4337-a768-abfa8e11fbc7/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b135b0a0a3f4cc747e0f7ce38f0cb462e85e6b373dbbb091f8f2a8585dadd4df/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-7f6pn_kube-system_428a8b5a-cbc3-4337-a768-abfa8e11fbc7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a7fae572050639cd14eb4843e998d7248594b15698ba646b8e106482d55729a2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a7fae572050639cd14eb4843e998d7248594b15698ba646b8e106482d55729a2","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-7f6pn_kube-system_428a8b5a-cbc3-4337-a768-abfa8e11fbc7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/428a8b5a-cbc3-4337-a768-abfa8e11fbc7/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/428a8b5a-cbc3-4337-a768-abfa8e11fbc7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/428a8b5a-cbc3-4337-a768-abfa8e11fbc7/containers/coredns/9dfdf10e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/428a8b5a-cbc3-4337-a768-abfa8e11fbc7/volumes/kubernetes.io~projected/kube-api-access-wzssl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-7f6pn","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"428a8b5a-cbc3-4337-a768-abfa8e11fbc7","kubernetes.io/config.seen":"2024-09-16T11:07:10.534621396Z","kubernetes.io/config.source":"api"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"fd61827c1bb0c44e91503fa5609ebaec70ec820e5181cb915d8f629f17587823","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fd61827c1bb0c44e91503fa5609ebaec70ec820e5181cb915d8f629f17587823/userdata","rootfs":"/var/lib/containers/storage/overlay/d20ded1f7a28c62a2bf8c836f9de3dab09c1424657617480db45346b2ba318f6/merged","created":"2024-09-16T11:06:29.749689859Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fd61827c1bb0c44e91503fa5609ebaec70ec820e5181cb915d8f629f17587823","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:06:29.712801486Z","io.kubernetes.cri-o.Image":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-f6mxh\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1ee793da-2cff-43cd-92d7-56628deec6f7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-f6mxh_1ee793da-2cff-43cd-92d7-56628deec6f7/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d20ded1f7a28c62a2bf8c836f9de3dab09c1424657617480db45346b2ba318f6/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-f6mxh_kube-system_1ee793da-2cff-43cd-92d7-56628deec6f7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2f5bae83408ad24187f4eddb82cd3da68ef5d78dd8ae4ba0fc42d2cbace4acf5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2f5bae83408ad24187f4eddb82cd3da68ef5d78dd8ae4ba0fc42d2cbace4acf5","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-f6mxh_kube-system_1ee793da-2cff-43cd-92d7-56628deec6f7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1ee793da-2cff-43cd-92d7-56628deec6f7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1ee793da-2cff-43cd-92d7-56628deec6f7/containers/kube-proxy/e366dd52\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/1ee793da-2cff-43cd-92d7-56628deec6f7/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1ee793da-2cff-43cd-92d7-56628deec6f7/volumes/kubernetes.io~projected/kube-api-access-wctqk\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-f6mxh","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1ee793da-2cff-43cd-92d7-56628deec6f7","kubernetes.io/config.seen":"2024-09-16T11:06:28.136066196Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0916 11:09:36.942219  268041 cri.go:126] list returned 15 containers
	I0916 11:09:36.942235  268041 cri.go:129] container: {ID:04a2ce1c135b3cadc447d08fe13b01f572cf6bc4e99d56b8164554b3584d245b Status:running}
	I0916 11:09:36.942270  268041 cri.go:135] skipping {04a2ce1c135b3cadc447d08fe13b01f572cf6bc4e99d56b8164554b3584d245b running}: state = "running", want "paused"
	I0916 11:09:36.942280  268041 cri.go:129] container: {ID:0d51197c27265c7561b8c0714503ee88f5842b3e8e1cbcc52853987feeb84c3e Status:stopped}
	I0916 11:09:36.942285  268041 cri.go:135] skipping {0d51197c27265c7561b8c0714503ee88f5842b3e8e1cbcc52853987feeb84c3e stopped}: state = "stopped", want "paused"
	I0916 11:09:36.942288  268041 cri.go:129] container: {ID:15b134810b77ab2a4d6a131a4c2bc2a4223f35ac502354fb60da4b493bc0328e Status:running}
	I0916 11:09:36.942292  268041 cri.go:135] skipping {15b134810b77ab2a4d6a131a4c2bc2a4223f35ac502354fb60da4b493bc0328e running}: state = "running", want "paused"
	I0916 11:09:36.942295  268041 cri.go:129] container: {ID:28c73b5d6532f8d45c2f07d6363785221db591d428f5b6b58605ab5c6dd56e64 Status:stopped}
	I0916 11:09:36.942299  268041 cri.go:135] skipping {28c73b5d6532f8d45c2f07d6363785221db591d428f5b6b58605ab5c6dd56e64 stopped}: state = "stopped", want "paused"
	I0916 11:09:36.942303  268041 cri.go:129] container: {ID:4821234946ec50a7f35bb9336beb039e237b1aabfc8247ef25ea8393d30783e1 Status:stopped}
	I0916 11:09:36.942307  268041 cri.go:135] skipping {4821234946ec50a7f35bb9336beb039e237b1aabfc8247ef25ea8393d30783e1 stopped}: state = "stopped", want "paused"
	I0916 11:09:36.942312  268041 cri.go:129] container: {ID:525f9506ca945c440f3a5a8633526e441bb62faddf984060457bc27d922958c9 Status:running}
	I0916 11:09:36.942317  268041 cri.go:135] skipping {525f9506ca945c440f3a5a8633526e441bb62faddf984060457bc27d922958c9 running}: state = "running", want "paused"
	I0916 11:09:36.942321  268041 cri.go:129] container: {ID:6c84610c54784f71c1242c0b673e61cc60bf9090cfff27c3250f5e31953c7eb7 Status:stopped}
	I0916 11:09:36.942326  268041 cri.go:135] skipping {6c84610c54784f71c1242c0b673e61cc60bf9090cfff27c3250f5e31953c7eb7 stopped}: state = "stopped", want "paused"
	I0916 11:09:36.942329  268041 cri.go:129] container: {ID:81220dc49cd84d4570213a65df3f108cb87eb2693bb5eea2e9446a3cb19040d6 Status:stopped}
	I0916 11:09:36.942338  268041 cri.go:135] skipping {81220dc49cd84d4570213a65df3f108cb87eb2693bb5eea2e9446a3cb19040d6 stopped}: state = "stopped", want "paused"
	I0916 11:09:36.942341  268041 cri.go:129] container: {ID:936f15644c4839f05a4b65664a285895b2b17c2383a569bd4b61d3e587ea3ffb Status:running}
	I0916 11:09:36.942345  268041 cri.go:135] skipping {936f15644c4839f05a4b65664a285895b2b17c2383a569bd4b61d3e587ea3ffb running}: state = "running", want "paused"
	I0916 11:09:36.942348  268041 cri.go:129] container: {ID:b2cc93bb5ccc5229eda19245a14edd8372ee5d43056b9bef4ec99fce6e6a6d3e Status:running}
	I0916 11:09:36.942352  268041 cri.go:135] skipping {b2cc93bb5ccc5229eda19245a14edd8372ee5d43056b9bef4ec99fce6e6a6d3e running}: state = "running", want "paused"
	I0916 11:09:36.942355  268041 cri.go:129] container: {ID:bc94f1b0be43901fcec2ae437a70c2d09c3df0dfdcd80d972683e6df5dc20c97 Status:stopped}
	I0916 11:09:36.942359  268041 cri.go:135] skipping {bc94f1b0be43901fcec2ae437a70c2d09c3df0dfdcd80d972683e6df5dc20c97 stopped}: state = "stopped", want "paused"
	I0916 11:09:36.942363  268041 cri.go:129] container: {ID:c999c8ecdad5e9176d92a1de3513a3d178dafcba46096ac24478d1fd2fa7411f Status:running}
	I0916 11:09:36.942367  268041 cri.go:135] skipping {c999c8ecdad5e9176d92a1de3513a3d178dafcba46096ac24478d1fd2fa7411f running}: state = "running", want "paused"
	I0916 11:09:36.942373  268041 cri.go:129] container: {ID:d8975288b029d778c48fc0db3538a03eba60bd95032a0ef13edcc1b910df53ff Status:running}
	I0916 11:09:36.942378  268041 cri.go:135] skipping {d8975288b029d778c48fc0db3538a03eba60bd95032a0ef13edcc1b910df53ff running}: state = "running", want "paused"
	I0916 11:09:36.942381  268041 cri.go:129] container: {ID:e5cbe79ac64f8ab8ac1a5379d59f2ccb90ae142d73dd3e0e517cfaa973a9e678 Status:stopped}
	I0916 11:09:36.942386  268041 cri.go:135] skipping {e5cbe79ac64f8ab8ac1a5379d59f2ccb90ae142d73dd3e0e517cfaa973a9e678 stopped}: state = "stopped", want "paused"
	I0916 11:09:36.942389  268041 cri.go:129] container: {ID:fd61827c1bb0c44e91503fa5609ebaec70ec820e5181cb915d8f629f17587823 Status:stopped}
	I0916 11:09:36.942393  268041 cri.go:135] skipping {fd61827c1bb0c44e91503fa5609ebaec70ec820e5181cb915d8f629f17587823 stopped}: state = "stopped", want "paused"
	I0916 11:09:36.942450  268041 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:09:37.002034  268041 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:09:37.002046  268041 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:09:37.002120  268041 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:09:37.014943  268041 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:09:37.016160  268041 kubeconfig.go:125] found "cert-expiration-997173" server: "https://192.168.103.2:8443"
	I0916 11:09:37.019040  268041 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:09:37.032002  268041 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0916 11:09:37.032027  268041 kubeadm.go:597] duration metric: took 29.976474ms to restartPrimaryControlPlane
	I0916 11:09:37.032035  268041 kubeadm.go:394] duration metric: took 300.410748ms to StartCluster
	I0916 11:09:37.032054  268041 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:37.032135  268041 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:09:37.033492  268041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:37.033794  268041 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:09:37.034029  268041 config.go:182] Loaded profile config "cert-expiration-997173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:09:37.034009  268041 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:37.034099  268041 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-997173"
	I0916 11:09:37.034115  268041 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-997173"
	W0916 11:09:37.034121  268041 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:09:37.034148  268041 host.go:66] Checking if "cert-expiration-997173" exists ...
	I0916 11:09:37.034186  268041 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-997173"
	I0916 11:09:37.034204  268041 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-997173"
	I0916 11:09:37.034530  268041 cli_runner.go:164] Run: docker container inspect cert-expiration-997173 --format={{.State.Status}}
	I0916 11:09:37.034610  268041 cli_runner.go:164] Run: docker container inspect cert-expiration-997173 --format={{.State.Status}}
	I0916 11:09:37.037899  268041 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:37.039468  268041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:37.059727  268041 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-997173"
	W0916 11:09:37.059740  268041 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:09:37.059765  268041 host.go:66] Checking if "cert-expiration-997173" exists ...
	I0916 11:09:37.060028  268041 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
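
The cri.go:126-135 lines earlier in this excerpt show minikube listing 15 containers and filtering them by state: it wants "paused" containers, so every running or stopped entry is skipped and the filter comes back empty. A minimal Go sketch of that filtering pattern, with a simplified Container type standing in for minikube's actual cri package types (hypothetical, for illustration only):

	package main

	import "fmt"

	// Container is a simplified stand-in for the {ID Status} pairs printed
	// by the cri.go log lines above; it is not minikube's actual type.
	type Container struct {
		ID     string
		Status string // "running", "stopped", "paused", ...
	}

	// filterByState keeps only containers whose Status equals want and
	// prints a "skipping" line for the rest, mirroring cri.go:129/135.
	func filterByState(containers []Container, want string) []Container {
		var kept []Container
		for _, c := range containers {
			if c.Status != want {
				fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
				continue
			}
			kept = append(kept, c)
		}
		return kept
	}

	func main() {
		cs := []Container{
			{ID: "04a2ce1c135b3", Status: "running"},
			{ID: "0d51197c27265", Status: "stopped"},
		}
		// Prints two "skipping" lines, then "kept: []" -- as in the log.
		fmt.Println("kept:", filterByState(cs, "paused"))
	}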
	
	
	==> CRI-O <==
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.045837495Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.045831237Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748],Size_:89437508,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=1c592d56-5e7f-4c92-a532-72d25b1a56c7 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.046368235Z" level=info msg="Creating container: kube-system/kube-controller-manager-kubernetes-upgrade-749637/kube-controller-manager" id=db718327-9f9e-407c-8350-0ce651056441 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.046451580Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.046378338Z" level=info msg="Creating container: kube-system/kube-apiserver-kubernetes-upgrade-749637/kube-apiserver" id=be9f3e77-181f-4d1f-80d4-2ef4a0517a8d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.046565121Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.046612793Z" level=info msg="Ran pod sandbox a47d8e7a21d6aaa2884a5a612163731c0dd0eb898ffbbad9a326693f5b53663e with infra container: kube-system/kube-scheduler-kubernetes-upgrade-749637/POD" id=2093c3b9-d424-479c-877b-681f57ef3b29 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.047312518Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.31.1" id=3cb4cac0-7e1b-4c27-a825-7079b7dea8bd name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.052278378Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,RepoTags:[registry.k8s.io/kube-scheduler:v1.31.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0 registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8],Size_:68420934,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=3cb4cac0-7e1b-4c27-a825-7079b7dea8bd name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.052959267Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.31.1" id=3f4c763b-e7af-4366-ae03-a4fa97387308 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.053130452Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,RepoTags:[registry.k8s.io/kube-scheduler:v1.31.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0 registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8],Size_:68420934,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=3f4c763b-e7af-4366-ae03-a4fa97387308 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.054615457Z" level=info msg="Creating container: kube-system/kube-scheduler-kubernetes-upgrade-749637/kube-scheduler" id=69a91e69-3f31-49f8-a82d-57240bb2ed46 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.054717798Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.124299217Z" level=info msg="Created container aa9b5f3c69a47038e0b6734f489a4a658a2afef53240da6eb18b3a20cc02e7ab: kube-system/etcd-kubernetes-upgrade-749637/etcd" id=1d7f1aaf-02bc-4a58-b26d-b705af149d0d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.125114681Z" level=info msg="Starting container: aa9b5f3c69a47038e0b6734f489a4a658a2afef53240da6eb18b3a20cc02e7ab" id=2ac998dc-e80f-49bd-83bc-2b2c1858cabe name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.133992926Z" level=info msg="Started container" PID=7536 containerID=aa9b5f3c69a47038e0b6734f489a4a658a2afef53240da6eb18b3a20cc02e7ab description=kube-system/etcd-kubernetes-upgrade-749637/etcd id=2ac998dc-e80f-49bd-83bc-2b2c1858cabe name=/runtime.v1.RuntimeService/StartContainer sandboxID=50486fbf59f5a950efa16fbc29e44c137c83753060ad438dc0fedeaf228451ea
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.200108397Z" level=info msg="Created container 4f7028b575e35eb9af39f4ddad2b8533eaa506defe3e3e695427dca910d46419: kube-system/kube-scheduler-kubernetes-upgrade-749637/kube-scheduler" id=69a91e69-3f31-49f8-a82d-57240bb2ed46 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.200696541Z" level=info msg="Starting container: 4f7028b575e35eb9af39f4ddad2b8533eaa506defe3e3e695427dca910d46419" id=57af9f3f-8c6d-48e3-806e-82c5276a3ba9 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.206431164Z" level=info msg="Created container d8650bc81d4852b2667cebb1a4fef9f7a5daabf967d9c03a85863c7715bea21f: kube-system/kube-controller-manager-kubernetes-upgrade-749637/kube-controller-manager" id=db718327-9f9e-407c-8350-0ce651056441 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.207069452Z" level=info msg="Created container 29ddbac85fd8723100d36397592a85b58ffc7bb4b40ad2c4b3e9f761a762f3eb: kube-system/kube-apiserver-kubernetes-upgrade-749637/kube-apiserver" id=be9f3e77-181f-4d1f-80d4-2ef4a0517a8d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.207124719Z" level=info msg="Starting container: d8650bc81d4852b2667cebb1a4fef9f7a5daabf967d9c03a85863c7715bea21f" id=ee4d2e9e-3421-4179-9395-b11875404240 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.207534270Z" level=info msg="Starting container: 29ddbac85fd8723100d36397592a85b58ffc7bb4b40ad2c4b3e9f761a762f3eb" id=74f26ffe-ac0d-4e9d-9d61-47583a99a73f name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.208317432Z" level=info msg="Started container" PID=7576 containerID=4f7028b575e35eb9af39f4ddad2b8533eaa506defe3e3e695427dca910d46419 description=kube-system/kube-scheduler-kubernetes-upgrade-749637/kube-scheduler id=57af9f3f-8c6d-48e3-806e-82c5276a3ba9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a47d8e7a21d6aaa2884a5a612163731c0dd0eb898ffbbad9a326693f5b53663e
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.215100886Z" level=info msg="Started container" PID=7593 containerID=d8650bc81d4852b2667cebb1a4fef9f7a5daabf967d9c03a85863c7715bea21f description=kube-system/kube-controller-manager-kubernetes-upgrade-749637/kube-controller-manager id=ee4d2e9e-3421-4179-9395-b11875404240 name=/runtime.v1.RuntimeService/StartContainer sandboxID=13977a3c461d08fde8e8592a8ed8549c74d749a1340478daf28d52d4f8f2bda3
	Sep 16 11:09:30 kubernetes-upgrade-749637 crio[560]: time="2024-09-16 11:09:30.216097148Z" level=info msg="Started container" PID=7586 containerID=29ddbac85fd8723100d36397592a85b58ffc7bb4b40ad2c4b3e9f761a762f3eb description=kube-system/kube-apiserver-kubernetes-upgrade-749637/kube-apiserver id=74f26ffe-ac0d-4e9d-9d61-47583a99a73f name=/runtime.v1.RuntimeService/StartContainer sandboxID=a18c854d609a086a7e0dbcbd10d1b4fa08eef421ab5c5995eedadc6a78149e40
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29ddbac85fd87       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            5                   a18c854d609a0       kube-apiserver-kubernetes-upgrade-749637
	d8650bc81d485       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   0                   13977a3c461d0       kube-controller-manager-kubernetes-upgrade-749637
	4f7028b575e35       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            0                   a47d8e7a21d6a       kube-scheduler-kubernetes-upgrade-749637
	aa9b5f3c69a47       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      0                   50486fbf59f5a       etcd-kubernetes-upgrade-749637
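
The status table above is the CRI view of the node's containers. A hedged sketch of pulling the same listing straight from the CRI-O socket (unix:///var/run/crio/crio.sock, per the cri-socket annotation in the next section) using the standard k8s.io/cri-api client; this is an illustration, not the helper the test harness actually uses:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket named by the node's cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter lists all containers, like the table above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Truncate IDs to 13 chars, matching the table's rendering.
			fmt.Printf("%.13s  %.13s  %v  %s\n", c.Id, c.ImageRef, c.State, c.Metadata.Name)
		}
	}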
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-749637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-749637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=kubernetes-upgrade-749637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_09_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:09:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-749637
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:09:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:09:35 +0000   Mon, 16 Sep 2024 11:09:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:09:35 +0000   Mon, 16 Sep 2024 11:09:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:09:35 +0000   Mon, 16 Sep 2024 11:09:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 16 Sep 2024 11:09:35 +0000   Mon, 16 Sep 2024 11:09:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    kubernetes-upgrade-749637
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 604680639e9f47fbb0ad7e727a6c89a6
	  System UUID:                27f5844a-793a-4b3a-8445-19299f023810
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-749637                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2s
	  kube-system                 kube-apiserver-kubernetes-upgrade-749637             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-749637    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-kubernetes-upgrade-749637             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age              From     Message
	  ----     ------                   ----             ----     -------
	  Normal   Starting                 8s               kubelet  Starting kubelet.
	  Warning  CgroupV1                 8s               kubelet  Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8s (x5 over 8s)  kubelet  Node kubernetes-upgrade-749637 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x5 over 8s)  kubelet  Node kubernetes-upgrade-749637 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x5 over 8s)  kubelet  Node kubernetes-upgrade-749637 status is now: NodeHasSufficientPID
	  Normal   Starting                 3s               kubelet  Starting kubelet.
	  Warning  CgroupV1                 3s               kubelet  Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2s               kubelet  Node kubernetes-upgrade-749637 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2s               kubelet  Node kubernetes-upgrade-749637 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2s               kubelet  Node kubernetes-upgrade-749637 status is now: NodeHasSufficientPID
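
The one failing condition above is Ready=False, and its message names the cause: the kubelet reports NetworkReady=false because there is no CNI configuration file in /etc/cni/net.d/ yet. A hedged sketch of the directory check that message implies (not the kubelet's actual implementation, which also accepts other extensions and validates file contents):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// The kubelet keeps the node NotReady until a CNI config appears here.
		matches, err := filepath.Glob("/etc/cni/net.d/*.conf*") // .conf, .conflist
		if err != nil {
			panic(err)
		}
		if len(matches) == 0 {
			fmt.Println("no CNI config found: node stays NotReady")
			os.Exit(1)
		}
		for _, m := range matches {
			fmt.Println("CNI config:", m)
		}
	}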
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 10:58] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000008] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000013] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000137] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +1.004052] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	
	
	==> etcd [aa9b5f3c69a47038e0b6734f489a4a658a2afef53240da6eb18b3a20cc02e7ab] <==
	{"level":"info","ts":"2024-09-16T11:09:30.301362Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:09:30.301473Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:09:30.301537Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:09:30.301614Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:09:30.301645Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:09:30.418775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:09:30.418824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:09:30.418845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2024-09-16T11:09:30.418859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:09:30.418867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:09:30.418878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:09:30.418887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:09:30.419984Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:09:30.420033Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:09:30.419986Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:kubernetes-upgrade-749637 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:09:30.420011Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:09:30.420274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:09:30.420323Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:09:30.420839Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:09:30.420929Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:09:30.420968Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:09:30.421427Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:09:30.421511Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:09:30.422285Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-09-16T11:09:30.422650Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
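
The raft lines above are a fresh single-member cluster electing itself: member 9f0758e1c58a86ed pre-votes, becomes candidate, and is elected leader at term 2 (term 1 is the bootstrap term), after which the client and metrics endpoints start serving. A hedged sketch of confirming the leader and term over the client port with go.etcd.io/etcd/client/v3; the TLS line above shows client-cert-auth=true, so against this node the Config would also need the etcd client certificates, omitted here:

	package main

	import (
		"context"
		"fmt"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"127.0.0.1:2379"},
			DialTimeout: 5 * time.Second,
			// TLS: required here (client-cert-auth=true above); omitted in this sketch.
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		st, err := cli.Status(ctx, "127.0.0.1:2379")
		if err != nil {
			panic(err)
		}
		// For the log above this would print leader=9f0758e1c58a86ed term=2.
		fmt.Printf("leader=%x term=%d version=%s\n", st.Leader, st.RaftTerm, st.Version)
	}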
	
	
	==> kernel <==
	 11:09:37 up 51 min,  0 users,  load average: 1.18, 2.33, 1.75
	Linux kubernetes-upgrade-749637 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [29ddbac85fd8723100d36397592a85b58ffc7bb4b40ad2c4b3e9f761a762f3eb] <==
	I0916 11:09:32.694395       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:09:32.694406       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:09:32.694414       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:09:32.694421       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:09:32.694648       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:09:32.694689       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:09:32.700596       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:09:32.700621       1 policy_source.go:224] refreshing policies
	E0916 11:09:32.749561       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E0916 11:09:32.766730       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:09:32.797071       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:09:32.970018       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:09:33.548815       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:09:33.555487       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:09:33.555508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:09:34.053493       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:09:34.095015       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:09:34.205379       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:09:34.212993       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0916 11:09:34.214142       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:09:34.218389       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:09:34.615269       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:09:35.002442       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:09:35.016677       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:09:35.027149       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [d8650bc81d4852b2667cebb1a4fef9f7a5daabf967d9c03a85863c7715bea21f] <==
	I0916 11:09:37.261924       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0916 11:09:37.261960       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0916 11:09:37.262008       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0916 11:09:37.262141       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0916 11:09:37.262174       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0916 11:09:37.262219       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0916 11:09:37.262507       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0916 11:09:37.262542       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0916 11:09:37.262575       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0916 11:09:37.262797       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0916 11:09:37.262865       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0916 11:09:37.262883       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0916 11:09:37.262902       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0916 11:09:37.414581       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I0916 11:09:37.414812       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0916 11:09:37.414830       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0916 11:09:37.414840       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0916 11:09:37.563243       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I0916 11:09:37.563393       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0916 11:09:37.563412       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0916 11:09:37.714466       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I0916 11:09:37.714488       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0916 11:09:37.714502       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0916 11:09:37.862295       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I0916 11:09:37.862371       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	
	
	==> kube-scheduler [4f7028b575e35eb9af39f4ddad2b8533eaa506defe3e3e695427dca910d46419] <==
	W0916 11:09:32.708356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:09:32.709405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:32.708371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:09:32.709453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:32.708797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:09:32.709487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:33.524398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:09:33.524439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:33.679915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:09:33.679957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:33.695980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 11:09:33.696021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:09:33.696026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 11:09:33.696064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:33.708779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:09:33.708818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:33.749244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:09:33.749282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:33.845170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:09:33.845238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:33.897904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:09:33.897955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:09:34.075263       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:09:34.075398       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:09:36.700217       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
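
The "forbidden" errors above are startup ordering rather than broken RBAC: the scheduler's informers begin listing before the apiserver has finished wiring up authorization, and the final line shows its caches syncing once it has. A hedged client-go sketch that asks the apiserver the same question those listers were asking (may system:kube-scheduler list pods cluster-wide?), assuming kubeconfig access to the cluster via clientcmd.RecommendedHomeFile:

	package main

	import (
		"context"
		"fmt"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Ask on behalf of the scheduler's identity, as in the log lines above.
		sar := &authorizationv1.SubjectAccessReview{
			Spec: authorizationv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Resource: "pods",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().
			Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		// Once RBAC has synced, this reports allowed=true.
		fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	}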
	
	
	==> kubelet <==
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.034493    7734 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.041881    7734 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.041985    7734 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074499    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e758ee1659beea31dfaa900baae94ab6-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-749637\" (UID: \"e758ee1659beea31dfaa900baae94ab6\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074547    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cf4f9f00a0746c6e1b1cbcba96125cd-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-749637\" (UID: \"0cf4f9f00a0746c6e1b1cbcba96125cd\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074571    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d230197a8fc6e523511d6128da4c8620-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-749637\" (UID: \"d230197a8fc6e523511d6128da4c8620\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074588    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d230197a8fc6e523511d6128da4c8620-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-749637\" (UID: \"d230197a8fc6e523511d6128da4c8620\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074608    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d230197a8fc6e523511d6128da4c8620-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-749637\" (UID: \"d230197a8fc6e523511d6128da4c8620\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074663    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d230197a8fc6e523511d6128da4c8620-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-749637\" (UID: \"d230197a8fc6e523511d6128da4c8620\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074732    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d230197a8fc6e523511d6128da4c8620-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-749637\" (UID: \"d230197a8fc6e523511d6128da4c8620\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074775    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/635d7969c34a26dd1692025136ed0811-etcd-data\") pod \"etcd-kubernetes-upgrade-749637\" (UID: \"635d7969c34a26dd1692025136ed0811\") " pod="kube-system/etcd-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074811    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cf4f9f00a0746c6e1b1cbcba96125cd-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-749637\" (UID: \"0cf4f9f00a0746c6e1b1cbcba96125cd\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074857    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cf4f9f00a0746c6e1b1cbcba96125cd-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-749637\" (UID: \"0cf4f9f00a0746c6e1b1cbcba96125cd\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074884    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cf4f9f00a0746c6e1b1cbcba96125cd-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-749637\" (UID: \"0cf4f9f00a0746c6e1b1cbcba96125cd\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074906    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d230197a8fc6e523511d6128da4c8620-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-749637\" (UID: \"d230197a8fc6e523511d6128da4c8620\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074930    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d230197a8fc6e523511d6128da4c8620-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-749637\" (UID: \"d230197a8fc6e523511d6128da4c8620\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074953    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/635d7969c34a26dd1692025136ed0811-etcd-certs\") pod \"etcd-kubernetes-upgrade-749637\" (UID: \"635d7969c34a26dd1692025136ed0811\") " pod="kube-system/etcd-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.074975    7734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cf4f9f00a0746c6e1b1cbcba96125cd-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-749637\" (UID: \"0cf4f9f00a0746c6e1b1cbcba96125cd\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.862848    7734 apiserver.go:52] "Watching apiserver"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.871583    7734 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: E0916 11:09:35.941707    7734 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-749637\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: E0916 11:09:35.943297    7734 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-749637\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-749637"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.993933    7734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-749637" podStartSLOduration=1.9939047429999999 podStartE2EDuration="1.993904743s" podCreationTimestamp="2024-09-16 11:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:35.956668543 +0000 UTC m=+1.158038890" watchObservedRunningTime="2024-09-16 11:09:35.993904743 +0000 UTC m=+1.195275087"
	Sep 16 11:09:35 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:35.994122    7734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-749637" podStartSLOduration=0.994111039 podStartE2EDuration="994.111039ms" podCreationTimestamp="2024-09-16 11:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:35.969447698 +0000 UTC m=+1.170818044" watchObservedRunningTime="2024-09-16 11:09:35.994111039 +0000 UTC m=+1.195481386"
	Sep 16 11:09:36 kubernetes-upgrade-749637 kubelet[7734]: I0916 11:09:36.011983    7734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-kubernetes-upgrade-749637" podStartSLOduration=1.011957523 podStartE2EDuration="1.011957523s" podCreationTimestamp="2024-09-16 11:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:36.01020848 +0000 UTC m=+1.211578828" watchObservedRunningTime="2024-09-16 11:09:36.011957523 +0000 UTC m=+1.213327870"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-749637 -n kubernetes-upgrade-749637
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-749637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-749637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (564.789µs)
helpers_test.go:263: kubectl --context kubernetes-upgrade-749637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:175: Cleaning up "kubernetes-upgrade-749637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-749637
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-749637: (2.230377458s)
--- FAIL: TestKubernetesUpgrade (316.58s)
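Every kubectl invocation in this run fails in well under a millisecond (564.82µs, 482.266µs, and so on) with "fork/exec /usr/local/bin/kubectl: exec format error". The kernel returns that error (ENOEXEC) when the file is not a runnable binary for the host: typically a wrong-architecture build, a truncated download, or an error page saved in place of the executable. The sub-millisecond durations show kubectl never executed at all, so these failures share one root cause on the CI host rather than being cluster regressions. A minimal triage sketch, assuming a standard Linux userland on the agent:

	file /usr/local/bin/kubectl        # should report: ELF 64-bit LSB executable, x86-64
	uname -m                           # host architecture; x86_64 on this agent
	stat -c %s /usr/local/bin/kubectl  # a zero or tiny size indicates a corrupt download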

x
+
TestNetworkPlugins/group/auto/NetCatPod (1800.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-838467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context auto-838467 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (482.266µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:329: TestNetworkPlugins/group/auto/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/auto/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p auto-838467 -n auto-838467
net_test.go:163: TestNetworkPlugins/group/auto/NetCatPod: showing logs for failed pods as of 2024-09-16 11:37:57.755781597 +0000 UTC m=+4535.602563020
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/auto/NetCatPod (1800.33s)
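The 15m0s pod wait above can never succeed: the netcat manifest was never applied because kubectl itself failed, so no pods matching "app=netcat" ever exist. A hedged manual reproduction of what net_test.go:149 runs, assuming a working kubectl and the test's testdata directory; the wait command is a stand-in for the test's pod poll, with the label selector taken from the log above:

	kubectl --context auto-838467 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-838467 -n default wait --for=condition=Ready pod -l app=netcat --timeout=15m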

x
+
TestNetworkPlugins/group/kindnet/NetCatPod (1800.31s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-838467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context kindnet-838467 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (565.61µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:25:02.443858   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/kindnet/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kindnet-838467 -n kindnet-838467
net_test.go:163: TestNetworkPlugins/group/kindnet/NetCatPod: showing logs for failed pods as of 2024-09-16 11:39:11.403181282 +0000 UTC m=+4609.249962686
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kindnet/NetCatPod (1800.31s)
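The interleaved cert_rotation.go:171 errors are a separate, benign symptom: client-go's certificate-rotation watcher still tracks client certificates for profiles deleted earlier in the run (functional-546931, addons-821781) and logs an UnhandledError each time the missing file fails to load. They point at stale entries in the shared kubeconfig, not at a cause of the NetCatPod failures. A hypothetical check for dangling certificate paths, assuming minikube records them under the client-certificate: key:

	grep 'client-certificate:' /home/jenkins/minikube-integration/19651-3799/kubeconfig \
	  | awk '{print $2}' | xargs -r ls -l 2>&1 | grep 'No such file'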

x
+
TestNetworkPlugins/group/calico/NetCatPod (1800.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-838467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context calico-838467 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (478.949µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:25:49.761703   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/calico/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/calico/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p calico-838467 -n calico-838467
net_test.go:163: TestNetworkPlugins/group/calico/NetCatPod: showing logs for failed pods as of 2024-09-16 11:40:40.869048742 +0000 UTC m=+4698.715830158
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/calico/NetCatPod (1800.32s)

x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (1800.47s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-838467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context enable-default-cni-838467 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (495.257µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
E0916 11:11:06.689514   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:13:05.511559   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:15:02.444841   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:16:06.689523   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:20:02.444052   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:21:06.689550   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:26:06.689787   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:29:45.513488   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:30:02.444523   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:31:06.689464   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:02.444556   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:06.689831   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/enable-default-cni/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/enable-default-cni/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p enable-default-cni-838467 -n enable-default-cni-838467
net_test.go:163: TestNetworkPlugins/group/enable-default-cni/NetCatPod: showing logs for failed pods as of 2024-09-16 11:40:55.111553679 +0000 UTC m=+4712.958335083
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/enable-default-cni/NetCatPod (1800.47s)

x
+
TestNetworkPlugins/group/flannel/NetCatPod (1800.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-838467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context flannel-838467 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (577.574µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:54:58.541011   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:58.547389   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:58.558848   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:58.580265   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:58.621712   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:58.703263   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:58.864807   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:59.186568   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:59.828703   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:01.110066   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:02.443797   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:03.672069   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:08.793502   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:19.035376   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/flannel/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/flannel/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p flannel-838467 -n flannel-838467
net_test.go:163: TestNetworkPlugins/group/flannel/NetCatPod: showing logs for failed pods as of 2024-09-16 12:09:09.607696702 +0000 UTC m=+6407.454478107
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/flannel/NetCatPod (1800.31s)

x
+
TestNetworkPlugins/group/bridge/NetCatPod (1800.33s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-838467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context bridge-838467 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (588.981µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:55:34.294595   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:39.516893   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/bridge/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/bridge/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p bridge-838467 -n bridge-838467
net_test.go:163: TestNetworkPlugins/group/bridge/NetCatPod: showing logs for failed pods as of 2024-09-16 12:10:27.04341942 +0000 UTC m=+6484.890200832
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/bridge/NetCatPod (1800.33s)

x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (1800.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-838467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context custom-flannel-838467 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (587.31µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
E0916 11:42:29.763587   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:57.427902   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:57.434340   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:57.445711   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:57.467081   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:57.508471   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:57.589882   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:57.751416   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:58.073098   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:58.715310   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:59.996913   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:43:02.558249   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:43:07.679623   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:43:17.921134   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:329: TestNetworkPlugins/group/custom-flannel/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/custom-flannel/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p custom-flannel-838467 -n custom-flannel-838467
net_test.go:163: TestNetworkPlugins/group/custom-flannel/NetCatPod: showing logs for failed pods as of 2024-09-16 12:11:44.980985634 +0000 UTC m=+6562.827767038
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/custom-flannel/NetCatPod (1800.31s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (3.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-406673 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-406673 create -f testdata/busybox.yaml: fork/exec /usr/local/bin/kubectl: exec format error (684.757µs)
start_stop_delete_test.go:196: kubectl --context old-k8s-version-406673 create -f testdata/busybox.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-406673
helpers_test.go:235: (dbg) docker inspect old-k8s-version-406673:

-- stdout --
	[
	    {
	        "Id": "28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b",
	        "Created": "2024-09-16T11:41:15.966557614Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:41:16.106919451Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hosts",
	        "LogPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b-json.log",
	        "Name": "/old-k8s-version-406673",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-406673:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-406673",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-406673",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-406673/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-406673",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eeb5fb104290f5dbbc6dda4f44d1ede524b4eca3b4a1c4e74d210afee339b2c7",
	            "SandboxKey": "/var/run/docker/netns/eeb5fb104290",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-406673": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49cf3e3468396ba01b588ae85b5e7bcdf3e6dcfeb05d207136018542ad1d54df",
	                    "EndpointID": "fd3146eb8ec55f5e8ad65367f8d3d1c86c03f630bbe9fea4a483f6e09022f0f3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-406673",
	                        "28d6c5fc26a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
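The inspect output shows the node container running with its guest ports published on loopback; in particular the apiserver's 8443/tcp is mapped to 127.0.0.1:33091. That means the cluster side can be probed without going through the broken kubectl binary. A sketch, assuming the apiserver's unauthenticated /version endpoint is reachable as in a default minikube setup:

	docker port old-k8s-version-406673 8443   # -> 127.0.0.1:33091
	curl -sk https://127.0.0.1:33091/version  # should return the apiserver's version JSON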
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25: (1.113627231s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                            | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                       | custom-flannel-838467     | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                           |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:41:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:41:09.129839  333016 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:41:09.130137  333016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:41:09.130147  333016 out.go:358] Setting ErrFile to fd 2...
	I0916 11:41:09.130151  333016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:41:09.130336  333016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:41:09.130914  333016 out.go:352] Setting JSON to false
	I0916 11:41:09.132012  333016 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5009,"bootTime":1726481860,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:41:09.132115  333016 start.go:139] virtualization: kvm guest
	I0916 11:41:07.485553  326192 out.go:235]   - Booting up control plane ...
	I0916 11:41:07.485672  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:41:07.485744  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:41:07.486328  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:41:07.495914  326192 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:41:07.501658  326192 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:41:07.501769  326192 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:41:07.587736  326192 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:41:07.587886  326192 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:41:08.094403  326192 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.791161ms
	I0916 11:41:08.094558  326192 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:41:09.134384  333016 out.go:177] * [old-k8s-version-406673] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:41:09.136012  333016 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:41:09.136030  333016 notify.go:220] Checking for updates...
	I0916 11:41:09.138120  333016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:41:09.139236  333016 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:41:09.140392  333016 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:41:09.141671  333016 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:41:09.142978  333016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:41:09.144925  333016 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145143  333016 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145276  333016 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145451  333016 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:41:09.170223  333016 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:41:09.170315  333016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:41:09.249446  333016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:74 SystemTime:2024-09-16 11:41:09.232481204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:41:09.249584  333016 docker.go:318] overlay module found
	I0916 11:41:09.251484  333016 out.go:177] * Using the docker driver based on user configuration
	I0916 11:41:09.252770  333016 start.go:297] selected driver: docker
	I0916 11:41:09.252787  333016 start.go:901] validating driver "docker" against <nil>
	I0916 11:41:09.252803  333016 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:41:09.253988  333016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:41:09.311590  333016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:74 SystemTime:2024-09-16 11:41:09.299494045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:41:09.311826  333016 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:41:09.312127  333016 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:41:09.314426  333016 out.go:177] * Using Docker driver with root privileges
	I0916 11:41:09.316047  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:09.316117  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:09.316131  333016 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:41:09.316215  333016 start.go:340] cluster config:
	{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:41:09.318014  333016 out.go:177] * Starting "old-k8s-version-406673" primary control-plane node in "old-k8s-version-406673" cluster
	I0916 11:41:09.319369  333016 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:41:09.320800  333016 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:41:09.322158  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:09.322191  333016 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:41:09.322200  333016 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 11:41:09.322238  333016 cache.go:56] Caching tarball of preloaded images
	I0916 11:41:09.322344  333016 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:41:09.322360  333016 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 11:41:09.322470  333016 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:41:09.322492  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json: {Name:mk5b7a46b7adef06d8ab94be0a464e9f79922d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:41:09.347179  333016 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:41:09.347202  333016 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:41:09.347274  333016 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:41:09.347293  333016 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:41:09.347302  333016 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:41:09.347311  333016 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:41:09.347321  333016 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:41:09.415165  333016 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:41:09.415223  333016 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:41:09.415268  333016 start.go:360] acquireMachinesLock for old-k8s-version-406673: {Name:mk8e16c995170a3c051ae96503b85729d385d06f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:41:09.415392  333016 start.go:364] duration metric: took 100.574µs to acquireMachinesLock for "old-k8s-version-406673"
	I0916 11:41:09.415421  333016 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:41:09.415511  333016 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:41:13.095977  326192 kubeadm.go:310] [api-check] The API server is healthy after 5.001444204s
	I0916 11:41:13.108645  326192 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:41:13.124915  326192 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:41:13.145729  326192 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:41:13.146046  326192 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-838467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:41:13.155883  326192 kubeadm.go:310] [bootstrap-token] Using token: arlmm3.z93mcdj0fcofrw2j
	I0916 11:41:09.417700  333016 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:41:09.418702  333016 start.go:159] libmachine.API.Create for "old-k8s-version-406673" (driver="docker")
	I0916 11:41:09.418758  333016 client.go:168] LocalClient.Create starting
	I0916 11:41:09.418863  333016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:41:09.418984  333016 main.go:141] libmachine: Decoding PEM data...
	I0916 11:41:09.419005  333016 main.go:141] libmachine: Parsing certificate...
	I0916 11:41:09.419062  333016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:41:09.419084  333016 main.go:141] libmachine: Decoding PEM data...
	I0916 11:41:09.419096  333016 main.go:141] libmachine: Parsing certificate...
	I0916 11:41:09.419492  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:41:09.447356  333016 cli_runner.go:211] docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:41:09.447439  333016 network_create.go:284] running [docker network inspect old-k8s-version-406673] to gather additional debugging logs...
	I0916 11:41:09.447459  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673
	W0916 11:41:09.466477  333016 cli_runner.go:211] docker network inspect old-k8s-version-406673 returned with exit code 1
	I0916 11:41:09.466514  333016 network_create.go:287] error running [docker network inspect old-k8s-version-406673]: docker network inspect old-k8s-version-406673: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-406673 not found
	I0916 11:41:09.466528  333016 network_create.go:289] output of [docker network inspect old-k8s-version-406673]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-406673 not found
	
	** /stderr **
	I0916 11:41:09.466624  333016 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:41:09.484833  333016 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:41:09.485829  333016 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:41:09.486598  333016 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:41:09.487223  333016 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:41:09.487906  333016 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:41:09.488504  333016 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:41:09.489409  333016 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002378380}
	I0916 11:41:09.489435  333016 network_create.go:124] attempt to create docker network old-k8s-version-406673 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:41:09.489487  333016 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-406673 old-k8s-version-406673
	I0916 11:41:09.569199  333016 network_create.go:108] docker network old-k8s-version-406673 192.168.103.0/24 created
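
	The "skipping subnet ... that is taken" lines above show the free-subnet scan: minikube steps through candidate private /24 blocks (192.168.49.0, .58, .67, .76, .85, .94) until it reaches one no docker bridge claims (192.168.103.0). A simplified Go sketch of that walk; the real check inspects host interfaces and routes, whereas here a lookup table stands in:

	    // subnet_scan.go: step the third octet by 9 (49, 58, 67, ...) and take
	    // the first 192.168.x.0/24 block that is not already in use.
	    package main

	    import "fmt"

	    func main() {
	        // Subnets the log reports as taken by existing docker bridge networks.
	        taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
	        for octet := 49; octet < 255; octet += 9 {
	            if taken[octet] {
	                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
	                continue
	            }
	            fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
	            return
	        }
	        fmt.Println("no free /24 found under 192.168.0.0/16")
	    }

	With six blocks taken, the loop lands on 192.168.103.0/24, matching the `docker network create` invocation that follows.
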
	I0916 11:41:09.569238  333016 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-406673" container
	I0916 11:41:09.569290  333016 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:41:09.589253  333016 cli_runner.go:164] Run: docker volume create old-k8s-version-406673 --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:41:09.614891  333016 oci.go:103] Successfully created a docker volume old-k8s-version-406673
	I0916 11:41:09.614987  333016 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-406673-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --entrypoint /usr/bin/test -v old-k8s-version-406673:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:41:10.191535  333016 oci.go:107] Successfully prepared a docker volume old-k8s-version-406673
	I0916 11:41:10.191600  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:10.191641  333016 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:41:10.191709  333016 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-406673:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:41:13.157532  326192 out.go:235]   - Configuring RBAC rules ...
	I0916 11:41:13.157708  326192 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:41:13.161760  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:41:13.168287  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:41:13.171578  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:41:13.175747  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:41:13.178942  326192 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:41:13.556267  326192 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:41:14.729155  326192 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:41:15.223914  326192 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:41:15.225001  326192 kubeadm.go:310] 
	I0916 11:41:15.225130  326192 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:41:15.225153  326192 kubeadm.go:310] 
	I0916 11:41:15.225274  326192 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:41:15.225295  326192 kubeadm.go:310] 
	I0916 11:41:15.225327  326192 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:41:15.225442  326192 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:41:15.225506  326192 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:41:15.225513  326192 kubeadm.go:310] 
	I0916 11:41:15.225585  326192 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:41:15.225594  326192 kubeadm.go:310] 
	I0916 11:41:15.225655  326192 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:41:15.225664  326192 kubeadm.go:310] 
	I0916 11:41:15.225726  326192 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:41:15.225793  326192 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:41:15.225858  326192 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:41:15.225864  326192 kubeadm.go:310] 
	I0916 11:41:15.225946  326192 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:41:15.226044  326192 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:41:15.226052  326192 kubeadm.go:310] 
	I0916 11:41:15.226146  326192 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token arlmm3.z93mcdj0fcofrw2j \
	I0916 11:41:15.226292  326192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:41:15.226330  326192 kubeadm.go:310] 	--control-plane 
	I0916 11:41:15.226339  326192 kubeadm.go:310] 
	I0916 11:41:15.226452  326192 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:41:15.226462  326192 kubeadm.go:310] 
	I0916 11:41:15.226567  326192 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token arlmm3.z93mcdj0fcofrw2j \
	I0916 11:41:15.226726  326192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:41:15.230177  326192 kubeadm.go:310] W0916 11:41:05.103778    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:41:15.230544  326192 kubeadm.go:310] W0916 11:41:05.104714    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:41:15.230854  326192 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:41:15.231019  326192 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
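
	The --discovery-token-ca-cert-hash printed with the join commands above is, per kubeadm's documented format, a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch that recomputes it from the CA file on the node (the path is kubeadm's default, /etc/kubernetes/pki/ca.crt):

	    // ca_hash.go: recompute kubeadm's discovery-token-ca-cert-hash from ca.crt.
	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	    )

	    func main() {
	        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            log.Fatal("no PEM block found in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // The hash covers the raw SubjectPublicKeyInfo, not the whole certificate.
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        fmt.Printf("sha256:%x\n", sum)
	    }

	Run against this cluster's CA, it should print the sha256:f35b6789... value embedded in both join commands.
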
	I0916 11:41:15.231059  326192 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0916 11:41:15.240253  326192 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0916 11:41:15.886029  333016 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-406673:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.694248034s)
	I0916 11:41:15.886060  333016 kic.go:203] duration metric: took 5.694418556s to extract preloaded images to volume ...
	W0916 11:41:15.886197  333016 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:41:15.886315  333016 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:41:15.946925  333016 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-406673 --name old-k8s-version-406673 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-406673 --network old-k8s-version-406673 --ip 192.168.103.2 --volume old-k8s-version-406673:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:41:16.264153  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Running}}
	I0916 11:41:16.284080  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.304543  333016 cli_runner.go:164] Run: docker exec old-k8s-version-406673 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:41:16.352309  333016 oci.go:144] the created container "old-k8s-version-406673" has a running status.
	I0916 11:41:16.352352  333016 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa...
	I0916 11:41:16.892301  333016 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:41:16.913952  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.935779  333016 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:41:16.935806  333016 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-406673 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:41:16.980961  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.999374  333016 machine.go:93] provisionDockerMachine start ...
	I0916 11:41:16.999449  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.020322  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.020675  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.020700  333016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:41:17.161159  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:41:17.161186  333016 ubuntu.go:169] provisioning hostname "old-k8s-version-406673"
	I0916 11:41:17.161236  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.179941  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.180126  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.180140  333016 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-406673 && echo "old-k8s-version-406673" | sudo tee /etc/hostname
	I0916 11:41:17.325696  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:41:17.325767  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.343273  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.343458  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.343478  333016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-406673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-406673/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-406673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:41:17.481523  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:41:17.481554  333016 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:41:17.481617  333016 ubuntu.go:177] setting up certificates
	I0916 11:41:17.481627  333016 provision.go:84] configureAuth start
	I0916 11:41:17.481677  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:17.501103  333016 provision.go:143] copyHostCerts
	I0916 11:41:17.501181  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:41:17.501192  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:41:17.501278  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:41:17.501418  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:41:17.501433  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:41:17.501476  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:41:17.501610  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:41:17.501622  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:41:17.501659  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:41:17.501734  333016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-406673 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-406673]
	I0916 11:41:17.565274  333016 provision.go:177] copyRemoteCerts
	I0916 11:41:17.565358  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:41:17.565401  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.584534  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:17.682900  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:41:17.707241  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 11:41:17.730893  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:41:17.754303  333016 provision.go:87] duration metric: took 272.661409ms to configureAuth
	I0916 11:41:17.754331  333016 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:41:17.754493  333016 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:41:17.754609  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.772647  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.772839  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.772862  333016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:41:18.029309  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:41:18.029373  333016 machine.go:96] duration metric: took 1.029938873s to provisionDockerMachine
	I0916 11:41:18.029387  333016 client.go:171] duration metric: took 8.610622274s to LocalClient.Create
	I0916 11:41:18.029411  333016 start.go:167] duration metric: took 8.610712242s to libmachine.API.Create "old-k8s-version-406673"
	I0916 11:41:18.029423  333016 start.go:293] postStartSetup for "old-k8s-version-406673" (driver="docker")
	I0916 11:41:18.029438  333016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:41:18.029502  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:41:18.029565  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.053377  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.151531  333016 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:41:18.155078  333016 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:41:18.155116  333016 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:41:18.155127  333016 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:41:18.155135  333016 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:41:18.155148  333016 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:41:18.155221  333016 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:41:18.155343  333016 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:41:18.155459  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:41:18.164209  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:41:18.188983  333016 start.go:296] duration metric: took 159.545394ms for postStartSetup
	I0916 11:41:18.189414  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:18.208296  333016 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:41:18.208603  333016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:41:18.208646  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.226298  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.318240  333016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:41:18.322605  333016 start.go:128] duration metric: took 8.907078338s to createHost
	I0916 11:41:18.322633  333016 start.go:83] releasing machines lock for "old-k8s-version-406673", held for 8.907228105s
	I0916 11:41:18.322689  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:18.341454  333016 ssh_runner.go:195] Run: cat /version.json
	I0916 11:41:18.341497  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.341552  333016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:41:18.341624  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.361726  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.362565  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.531472  333016 ssh_runner.go:195] Run: systemctl --version
	I0916 11:41:18.535744  333016 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:41:18.683220  333016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:41:18.690107  333016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:41:18.713733  333016 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:41:18.713813  333016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:41:18.747022  333016 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:41:18.747047  333016 start.go:495] detecting cgroup driver to use...
	I0916 11:41:18.747084  333016 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:41:18.747140  333016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:41:18.762745  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:41:18.774503  333016 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:41:18.774568  333016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:41:18.787349  333016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:41:18.801095  333016 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:41:18.890378  333016 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:41:18.976389  333016 docker.go:233] disabling docker service ...
	I0916 11:41:18.976456  333016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:41:19.000019  333016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:41:19.012839  333016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:41:19.097510  333016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:41:15.242201  326192 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:41:15.242282  326192 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0916 11:41:15.247506  326192 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0916 11:41:15.247546  326192 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0916 11:41:15.272691  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:41:15.900673  326192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:41:15.900751  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:15.900763  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-838467 minikube.k8s.io/updated_at=2024_09_16T11_41_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=custom-flannel-838467 minikube.k8s.io/primary=true
	I0916 11:41:15.909744  326192 ops.go:34] apiserver oom_adj: -16
	I0916 11:41:16.023309  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:16.524490  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:17.023552  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:17.524056  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:18.023739  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:18.523649  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:19.024135  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:19.147138  326192 kubeadm.go:1113] duration metric: took 3.246461505s to wait for elevateKubeSystemPrivileges
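
	The run of `kubectl get sa default` calls above, spaced roughly 500ms apart, is a poll for the `default` ServiceAccount to appear in the new cluster; until the controller manager creates it, the `create clusterrolebinding minikube-rbac` issued earlier has nothing to bind. A minimal Go sketch of such a poll, shelling out to kubectl (the kubeconfig path is taken from the log; the 2-minute timeout is an assumption):

	    // wait_sa.go: poll until `kubectl get sa default` succeeds, i.e. the
	    // default ServiceAccount has been created by the controller manager.
	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command("kubectl", "get", "sa", "default",
	                "--kubeconfig", "/var/lib/minikube/kubeconfig")
	            if cmd.Run() == nil {
	                fmt.Println("default ServiceAccount is present")
	                return
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Fprintln(os.Stderr, "timed out waiting for default ServiceAccount")
	        os.Exit(1)
	    }

	Here the wait took about 3.2s before StartCluster was declared complete.
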
	I0916 11:41:19.147176  326192 kubeadm.go:394] duration metric: took 14.233006135s to StartCluster
	I0916 11:41:19.147199  326192 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:19.147270  326192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:41:19.148868  326192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:19.149075  326192 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:41:19.149161  326192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:41:19.149222  326192 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:41:19.149310  326192 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-838467"
	I0916 11:41:19.149329  326192 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-838467"
	I0916 11:41:19.149371  326192 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-838467"
	I0916 11:41:19.149383  326192 host.go:66] Checking if "custom-flannel-838467" exists ...
	I0916 11:41:19.149387  326192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-838467"
	I0916 11:41:19.149454  326192 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:19.149819  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.150001  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.151132  326192 out.go:177] * Verifying Kubernetes components...
	I0916 11:41:19.152474  326192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:19.173524  326192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:19.203214  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:41:19.218863  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:41:19.238609  333016 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 11:41:19.238684  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.250087  333016 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:41:19.250145  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.259354  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.268531  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.279027  333016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:41:19.287949  333016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:41:19.297178  333016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:41:19.307577  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:19.387191  333016 ssh_runner.go:195] Run: sudo systemctl restart crio
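
	Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart. The fragment is reconstructed from the log's sed expressions; the section headers follow CRI-O's usual TOML layout and are an assumption, not something read back from the node:

	    [crio.runtime]
	    # Match the "cgroupfs" driver detected on the host earlier (detect.go).
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"

	    [crio.image]
	    # Pause image pinned to match the Kubernetes v1.20.0 preload.
	    pause_image = "registry.k8s.io/pause:3.2"

	crio came back within ~100ms of the restart, and the crictl socket wait at 11:41:19.487654 below succeeds immediately.
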
	I0916 11:41:19.487654  333016 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:41:19.487710  333016 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:41:19.491139  333016 start.go:563] Will wait 60s for crictl version
	I0916 11:41:19.491188  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:19.496116  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:41:19.544501  333016 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:41:19.544576  333016 ssh_runner.go:195] Run: crio --version
	I0916 11:41:19.578771  333016 ssh_runner.go:195] Run: crio --version
	I0916 11:41:19.643731  333016 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0916 11:41:19.173725  326192 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-838467"
	I0916 11:41:19.173990  326192 host.go:66] Checking if "custom-flannel-838467" exists ...
	I0916 11:41:19.174551  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.175324  326192 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:41:19.175346  326192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:41:19.175405  326192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-838467
	I0916 11:41:19.197142  326192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/custom-flannel-838467/id_rsa Username:docker}
	I0916 11:41:19.198430  326192 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:41:19.198462  326192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:41:19.198538  326192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-838467
	I0916 11:41:19.224134  326192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/custom-flannel-838467/id_rsa Username:docker}
	I0916 11:41:19.335865  326192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:41:19.421603  326192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:41:19.422382  326192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:41:19.497244  326192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:41:19.839268  326192 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0916 11:41:20.148001  326192 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-838467" to be "Ready" ...
	I0916 11:41:20.158855  326192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
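Addon enablement here is nothing more than the bundled manifests applied with the node-local kubectl against the in-cluster kubeconfig. The two apply runs above, folded into one illustrative invocation:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.1/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml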
	I0916 11:41:19.645160  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:41:19.661707  333016 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:41:19.665380  333016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:41:19.676415  333016 kubeadm.go:883] updating cluster {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:41:19.676535  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:19.676579  333016 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:41:19.742047  333016 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:41:19.742105  333016 ssh_runner.go:195] Run: which lz4
	I0916 11:41:19.745784  333016 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:41:19.749024  333016 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:41:19.749053  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 11:41:20.726623  333016 crio.go:462] duration metric: took 980.877496ms to copy over tarball
	I0916 11:41:20.726707  333016 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:41:23.267869  333016 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.541121164s)
	I0916 11:41:23.267903  333016 crio.go:469] duration metric: took 2.54124645s to extract the tarball
	I0916 11:41:23.267913  333016 ssh_runner.go:146] rm: /preloaded.tar.lz4
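The preload path above is: stat for an existing tarball on the node, scp the cached one over when absent, unpack it into /var with xattrs intact so CRI-O sees the images, then delete the tarball. The on-node half as a standalone sketch:

    # extract the image preload; --xattrs keeps file capabilities on the binaries
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json   # confirm what the runtime now has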
	I0916 11:41:23.340628  333016 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:41:23.374342  333016 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:41:23.374368  333016 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:41:23.374427  333016 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.374457  333016 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:41:23.374497  333016 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.374502  333016 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.374514  333016 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.374530  333016 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.374495  333016 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.374427  333016 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:23.375894  333016 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.375896  333016 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.376044  333016 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.375896  333016 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.375906  333016 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.375906  333016 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:41:23.375914  333016 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.375914  333016 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:23.630361  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 11:41:23.660531  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.669314  333016 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:41:23.669405  333016 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:41:23.669458  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.677017  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.679340  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.682602  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.687346  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.706552  333016 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:41:23.706598  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.706602  333016 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.706706  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.733323  333016 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:41:23.733409  333016 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:41:23.733451  333016 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.733496  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.733421  333016 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.733568  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.738018  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.796536  333016 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:41:23.796583  333016 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.796639  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.807990  333016 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:41:23.808034  333016 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.808046  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.808076  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.809979  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.810071  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.810119  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.909741  333016 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:41:23.909838  333016 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.909861  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.909887  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.912887  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.912936  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.920082  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.920254  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.920369  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:24.097891  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.097902  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:24.110265  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:24.110310  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:24.110381  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:24.110394  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:41:24.112528  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:20.160096  326192 addons.go:510] duration metric: took 1.010872416s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:41:20.344573  326192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-838467" context rescaled to 1 replicas
	I0916 11:41:22.152238  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:24.231779  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.231878  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:24.299701  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:41:24.299787  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:41:24.299816  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:24.299863  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:41:24.330660  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:41:24.333761  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.338478  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:41:24.405783  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:41:24.516769  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:24.655351  333016 cache_images.go:92] duration metric: took 1.280968033s to LoadCachedImages
	W0916 11:41:24.655436  333016 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
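The warning above is non-fatal in this run: the per-image cache file is simply missing on the Jenkins host, so the control-plane images end up being pulled during kubeadm preflight instead (visible further down in the "[preflight] Pulling images" line). A quick check for the same condition, reusing the path from the warning:

    CACHE=/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64
    stat "$CACHE/registry.k8s.io/pause_3.2" \
      || echo "cache miss: images will be pulled at kubeadm preflight"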
	I0916 11:41:24.655451  333016 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 crio true true} ...
	I0916 11:41:24.655554  333016 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-406673 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:41:24.655630  333016 ssh_runner.go:195] Run: crio config
	I0916 11:41:24.698372  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:24.698394  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:24.698405  333016 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:41:24.698433  333016 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-406673 NodeName:old-k8s-version-406673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:41:24.698606  333016 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-406673"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:41:24.698743  333016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:41:24.708344  333016 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:41:24.708407  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:41:24.717550  333016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (481 bytes)
	I0916 11:41:24.734803  333016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:41:24.752339  333016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0916 11:41:24.769057  333016 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:41:24.772442  333016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
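The /etc/hosts rewrite above follows a grep-out-then-append pattern so repeated runs stay idempotent; spelled out (printf stands in for the log's literal tab, purely for readability):

    # remove any stale mapping for the name, re-add the current one, install the result
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.103.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts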
	I0916 11:41:24.782978  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:24.858827  333016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:41:24.871739  333016 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673 for IP: 192.168.103.2
	I0916 11:41:24.871765  333016 certs.go:194] generating shared ca certs ...
	I0916 11:41:24.871782  333016 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:24.871958  333016 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:41:24.872020  333016 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:41:24.872037  333016 certs.go:256] generating profile certs ...
	I0916 11:41:24.872110  333016 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key
	I0916 11:41:24.872131  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt with IP's: []
	I0916 11:41:25.048291  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt ...
	I0916 11:41:25.048318  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: {Name:mk4abba6a67f25ef9c59bbcacc5c5dee31e9387f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.048539  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key ...
	I0916 11:41:25.048558  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key: {Name:mk1c39c492dfee9b396f585a47b8783f07fe5103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.048670  333016 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db
	I0916 11:41:25.048688  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:41:25.381754  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db ...
	I0916 11:41:25.381783  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db: {Name:mkba7ece117fcceb2e5dcd2de345d183af279101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.381974  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db ...
	I0916 11:41:25.381991  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db: {Name:mk163caf0f8c6bde6835ea80dd77b20aeeee31cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.382087  333016 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt
	I0916 11:41:25.382180  333016 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key
	I0916 11:41:25.382257  333016 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key
	I0916 11:41:25.382279  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt with IP's: []
	I0916 11:41:25.486866  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt ...
	I0916 11:41:25.486894  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt: {Name:mkcd5e73a62407403f2b7382a6bee9d25e01d246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.487102  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key ...
	I0916 11:41:25.487119  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key: {Name:mk02438bf6f24dc9f1622119085bb7f5eb856e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.487333  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:41:25.487376  333016 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:41:25.487393  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:41:25.487423  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:41:25.487451  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:41:25.487489  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:41:25.487545  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:41:25.488261  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:41:25.513968  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:41:25.538557  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:41:25.562712  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:41:25.585718  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:41:25.611011  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:41:25.636044  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:41:25.670989  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:41:25.696346  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:41:25.726347  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:41:25.751075  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:41:25.774722  333016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:41:25.792779  333016 ssh_runner.go:195] Run: openssl version
	I0916 11:41:25.800733  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:41:25.814085  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.818059  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.818119  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.825641  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:41:25.839273  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:41:25.851228  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.855171  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.855271  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.862163  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:41:25.871484  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:41:25.880429  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.883742  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.883801  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.890371  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
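The openssl/ln pairs above reproduce what c_rehash does: link each PEM under /etc/ssl/certs, compute its subject hash, and add a <hash>.0 symlink so OpenSSL-based clients can resolve it. For one of the certs (hash 51391683 per the log):

    PEM=/usr/share/ca-certificates/11208.pem
    sudo ln -fs "$PEM" /etc/ssl/certs/11208.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")     # 51391683 in this run
    sudo ln -fs /etc/ssl/certs/11208.pem "/etc/ssl/certs/$HASH.0"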
	I0916 11:41:25.901843  333016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:41:25.906238  333016 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:41:25.906290  333016 kubeadm.go:392] StartCluster: {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:41:25.906380  333016 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:41:25.906433  333016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:41:25.947314  333016 cri.go:89] found id: ""
	I0916 11:41:25.947371  333016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:41:25.956327  333016 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
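With the rendered config now at its final path, a hedged way to sanity-check such a file before a real init (not something this run does) is kubeadm's dry-run mode, which renders the manifests without mutating the node:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run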
	I0916 11:41:25.965412  333016 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:41:25.965494  333016 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:41:25.974409  333016 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:41:25.974427  333016 kubeadm.go:157] found existing configuration files:
	
	I0916 11:41:25.974464  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:41:25.983428  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:41:25.983491  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:41:25.991673  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:41:26.002161  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:41:26.002229  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:41:26.013896  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:41:26.023373  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:41:26.023434  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:41:26.033671  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:41:26.044330  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:41:26.044397  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
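The four grep/rm pairs above are one loop unrolled: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. Equivalently:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # stale or absent: kubeadm will rewrite it
    done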
	I0916 11:41:26.052990  333016 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:41:26.116552  333016 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 11:41:26.116953  333016 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:41:26.159382  333016 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:41:26.159511  333016 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:41:26.159572  333016 kubeadm.go:310] OS: Linux
	I0916 11:41:26.159642  333016 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:41:26.159724  333016 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:41:26.159793  333016 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:41:26.159860  333016 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:41:26.159924  333016 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:41:26.159993  333016 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:41:26.160055  333016 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:41:26.160116  333016 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:41:26.255274  333016 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:41:26.255371  333016 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:41:26.255493  333016 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 11:41:26.457194  333016 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:41:26.460187  333016 out.go:235]   - Generating certificates and keys ...
	I0916 11:41:26.460307  333016 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:41:26.460412  333016 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:41:26.745903  333016 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:41:27.101695  333016 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:41:27.277283  333016 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:41:27.532738  333016 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:41:27.685826  333016 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:41:27.686041  333016 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-406673] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:41:27.949848  333016 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:41:27.950175  333016 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-406673] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:41:28.302029  333016 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:41:28.615418  333016 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:41:28.692846  333016 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:41:28.692963  333016 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:41:28.844556  333016 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:41:28.948784  333016 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:41:29.064396  333016 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:41:24.651896  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:27.152349  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:27.651470  326192 node_ready.go:49] node "custom-flannel-838467" has status "Ready":"True"
	I0916 11:41:27.651491  326192 node_ready.go:38] duration metric: took 7.503462411s for node "custom-flannel-838467" to be "Ready" ...
	I0916 11:41:27.651501  326192 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:41:27.659052  326192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace to be "Ready" ...
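The pod_ready wait above has a rough kubectl equivalent; a hedged one-liner for the same CoreDNS gate (the harness itself watches the API directly rather than shelling out):

    kubectl --context custom-flannel-838467 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m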
	I0916 11:41:29.445363  333016 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:41:29.457728  333016 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:41:29.458698  333016 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:41:29.458771  333016 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:41:29.544165  333016 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:41:29.546617  333016 out.go:235]   - Booting up control plane ...
	I0916 11:41:29.546749  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:41:29.552789  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:41:29.553876  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:41:29.554528  333016 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:41:29.556653  333016 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 11:41:29.665548  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:32.165305  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:34.665436  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:36.665933  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:42.059188  333016 kubeadm.go:310] [apiclient] All control plane components are healthy after 12.502447 seconds
	I0916 11:41:42.059386  333016 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:41:42.071733  333016 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:41:42.590849  333016 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:41:42.591044  333016 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-406673 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0916 11:41:43.098669  333016 kubeadm.go:310] [bootstrap-token] Using token: 24uzd8.f12jm4gfeszy41x7
	I0916 11:41:43.100371  333016 out.go:235]   - Configuring RBAC rules ...
	I0916 11:41:43.100541  333016 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:41:43.104683  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:41:43.111318  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:41:43.113371  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:41:43.115697  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:41:43.118292  333016 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:41:43.126934  333016 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:41:43.360284  333016 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:41:43.516475  333016 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:41:43.517781  333016 kubeadm.go:310] 
	I0916 11:41:43.517878  333016 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:41:43.517889  333016 kubeadm.go:310] 
	I0916 11:41:43.518023  333016 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:41:43.518044  333016 kubeadm.go:310] 
	I0916 11:41:43.518068  333016 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:41:43.518140  333016 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:41:43.518207  333016 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:41:43.518214  333016 kubeadm.go:310] 
	I0916 11:41:43.518276  333016 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:41:43.518282  333016 kubeadm.go:310] 
	I0916 11:41:43.518322  333016 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:41:43.518349  333016 kubeadm.go:310] 
	I0916 11:41:43.518438  333016 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:41:43.518542  333016 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:41:43.518635  333016 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:41:43.518650  333016 kubeadm.go:310] 
	I0916 11:41:43.518802  333016 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:41:43.518905  333016 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:41:43.518915  333016 kubeadm.go:310] 
	I0916 11:41:43.519009  333016 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 24uzd8.f12jm4gfeszy41x7 \
	I0916 11:41:43.519175  333016 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:41:43.519216  333016 kubeadm.go:310]     --control-plane 
	I0916 11:41:43.519226  333016 kubeadm.go:310] 
	I0916 11:41:43.519328  333016 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:41:43.519343  333016 kubeadm.go:310] 
	I0916 11:41:43.519454  333016 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 24uzd8.f12jm4gfeszy41x7 \
	I0916 11:41:43.519608  333016 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:41:43.521710  333016 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:41:43.521904  333016 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:41:43.521936  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:43.521946  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:43.523972  333016 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:41:43.525520  333016 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:41:43.529863  333016 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0916 11:41:43.529889  333016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:41:43.551346  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:41:43.999610  333016 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:41:43.999688  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:43.999735  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-406673 minikube.k8s.io/updated_at=2024_09_16T11_41_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=old-k8s-version-406673 minikube.k8s.io/primary=true
	I0916 11:41:44.008244  333016 ops.go:34] apiserver oom_adj: -16
	I0916 11:41:44.110534  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
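Of the post-init steps above, the clusterrolebinding is the load-bearing one: it grants the kube-system default service account cluster-admin, which the addon manifests rely on. As logged:

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default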
	I0916 11:41:39.164837  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:41.165886  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:43.167455  326192 pod_ready.go:93] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.167492  326192 pod_ready.go:82] duration metric: took 15.508409943s for pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.167506  326192 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.173572  326192 pod_ready.go:93] pod "etcd-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.173597  326192 pod_ready.go:82] duration metric: took 6.084061ms for pod "etcd-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.173608  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.179725  326192 pod_ready.go:93] pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.179750  326192 pod_ready.go:82] duration metric: took 6.135589ms for pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.179759  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.185203  326192 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.185229  326192 pod_ready.go:82] duration metric: took 5.46328ms for pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.185240  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-4w8bp" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.190735  326192 pod_ready.go:93] pod "kube-proxy-4w8bp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.190759  326192 pod_ready.go:82] duration metric: took 5.51193ms for pod "kube-proxy-4w8bp" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.190771  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.563503  326192 pod_ready.go:93] pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.563527  326192 pod_ready.go:82] duration metric: took 372.750298ms for pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.563545  326192 pod_ready.go:39] duration metric: took 15.912032814s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:41:43.563563  326192 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:41:43.563624  326192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:41:43.576500  326192 api_server.go:72] duration metric: took 24.427395386s to wait for apiserver process to appear ...
	I0916 11:41:43.576526  326192 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:41:43.576546  326192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:41:43.580307  326192 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:41:43.581394  326192 api_server.go:141] control plane version: v1.31.1
	I0916 11:41:43.581418  326192 api_server.go:131] duration metric: took 4.885665ms to wait for apiserver health ...
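
The healthz probe above is reproducible by hand: the apiserver grants /healthz to unauthenticated requests by default, and -k skips verification of the cluster's self-signed serving certificate:

    curl -k https://192.168.85.2:8443/healthz
    # prints "ok" on success, matching the log lines above
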
	I0916 11:41:43.581425  326192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:41:43.766131  326192 system_pods.go:59] 7 kube-system pods found
	I0916 11:41:43.766162  326192 system_pods.go:61] "coredns-7c65d6cfc9-v8wnh" [70e55c30-2327-486e-a2f2-45ca826531d5] Running
	I0916 11:41:43.766167  326192 system_pods.go:61] "etcd-custom-flannel-838467" [c47fb50c-7a36-43f2-8b62-a341436839c9] Running
	I0916 11:41:43.766170  326192 system_pods.go:61] "kube-apiserver-custom-flannel-838467" [36053552-7860-4bd5-9898-ffb7ab082a55] Running
	I0916 11:41:43.766174  326192 system_pods.go:61] "kube-controller-manager-custom-flannel-838467" [1b575692-31f1-4a70-be42-76c9439fa88d] Running
	I0916 11:41:43.766178  326192 system_pods.go:61] "kube-proxy-4w8bp" [0aa1010b-96bf-491d-b9ca-f9fb9b9cfbf8] Running
	I0916 11:41:43.766181  326192 system_pods.go:61] "kube-scheduler-custom-flannel-838467" [dc64976a-912d-4ba4-869a-a96a59c28ecd] Running
	I0916 11:41:43.766183  326192 system_pods.go:61] "storage-provisioner" [506055cc-e639-4857-adbc-0c254600538f] Running
	I0916 11:41:43.766191  326192 system_pods.go:74] duration metric: took 184.758722ms to wait for pod list to return data ...
	I0916 11:41:43.766197  326192 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:41:43.964353  326192 default_sa.go:45] found service account: "default"
	I0916 11:41:43.964386  326192 default_sa.go:55] duration metric: took 198.182376ms for default service account to be created ...
	I0916 11:41:43.964400  326192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:41:44.167530  326192 system_pods.go:86] 7 kube-system pods found
	I0916 11:41:44.167574  326192 system_pods.go:89] "coredns-7c65d6cfc9-v8wnh" [70e55c30-2327-486e-a2f2-45ca826531d5] Running
	I0916 11:41:44.167584  326192 system_pods.go:89] "etcd-custom-flannel-838467" [c47fb50c-7a36-43f2-8b62-a341436839c9] Running
	I0916 11:41:44.167591  326192 system_pods.go:89] "kube-apiserver-custom-flannel-838467" [36053552-7860-4bd5-9898-ffb7ab082a55] Running
	I0916 11:41:44.167597  326192 system_pods.go:89] "kube-controller-manager-custom-flannel-838467" [1b575692-31f1-4a70-be42-76c9439fa88d] Running
	I0916 11:41:44.167602  326192 system_pods.go:89] "kube-proxy-4w8bp" [0aa1010b-96bf-491d-b9ca-f9fb9b9cfbf8] Running
	I0916 11:41:44.167608  326192 system_pods.go:89] "kube-scheduler-custom-flannel-838467" [dc64976a-912d-4ba4-869a-a96a59c28ecd] Running
	I0916 11:41:44.167612  326192 system_pods.go:89] "storage-provisioner" [506055cc-e639-4857-adbc-0c254600538f] Running
	I0916 11:41:44.167621  326192 system_pods.go:126] duration metric: took 203.213461ms to wait for k8s-apps to be running ...
	I0916 11:41:44.167631  326192 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:41:44.167685  326192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:41:44.180782  326192 system_svc.go:56] duration metric: took 13.141604ms WaitForService to wait for kubelet
	I0916 11:41:44.180814  326192 kubeadm.go:582] duration metric: took 25.031715543s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:41:44.180838  326192 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:41:44.364740  326192 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:41:44.364769  326192 node_conditions.go:123] node cpu capacity is 8
	I0916 11:41:44.364779  326192 node_conditions.go:105] duration metric: took 183.936169ms to run NodePressure ...
	I0916 11:41:44.364790  326192 start.go:241] waiting for startup goroutines ...
	I0916 11:41:44.364796  326192 start.go:246] waiting for cluster config update ...
	I0916 11:41:44.364805  326192 start.go:255] writing updated cluster config ...
	I0916 11:41:44.365079  326192 ssh_runner.go:195] Run: rm -f paused
	I0916 11:41:44.371879  326192 out.go:177] * Done! kubectl is now configured to use "custom-flannel-838467" cluster and "default" namespace by default
	E0916 11:41:44.373468  326192 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
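
This "exec format error" is the same failure that aborts the kubectl-dependent tests throughout this report: the kernel refuses to execute /usr/local/bin/kubectl, which almost always means the file is built for the wrong architecture or is truncated/corrupt rather than missing. A quick triage with standard tools, nothing minikube-specific:

    file /usr/local/bin/kubectl                      # reports ELF class and target arch
    uname -m                                         # host architecture to compare against
    head -c 4 /usr/local/bin/kubectl | od -An -tx1   # a valid ELF begins 7f 45 4c 46
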
	I0916 11:41:44.611272  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:45.110742  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:45.610915  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:46.110672  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:46.611285  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:47.111092  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:47.610788  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:48.111373  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:48.611189  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:49.110790  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:49.611662  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:50.111045  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:50.611562  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:51.111442  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:51.611212  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:52.111501  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:52.611443  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:53.111633  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:53.611581  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:54.111313  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:54.611583  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:55.111268  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:55.610651  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:56.110600  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:56.610770  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:57.111250  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:57.610984  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:58.111247  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:58.611501  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:59.111271  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:59.611607  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.110881  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.611603  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.717585  333016 kubeadm.go:1113] duration metric: took 16.717955139s to wait for elevateKubeSystemPrivileges
	I0916 11:42:00.717628  333016 kubeadm.go:394] duration metric: took 34.811339511s to StartCluster
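
The burst of identical "get sa default" runs above is a ~500 ms poll: bring-up is not considered done until the token controller has created the default ServiceAccount that the minikube-rbac clusterrolebinding (created at 11:41:43 above) references. A minimal equivalent of that wait, using the same pinned binary and kubeconfig as the log:

    until sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
          --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
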
	I0916 11:42:00.717650  333016 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:42:00.717734  333016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:42:00.719920  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:42:00.720139  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:42:00.720142  333016 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:42:00.720381  333016 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:42:00.720426  333016 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:42:00.720490  333016 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-406673"
	I0916 11:42:00.720512  333016 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-406673"
	I0916 11:42:00.720537  333016 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:42:00.720922  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.720974  333016 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-406673"
	I0916 11:42:00.721002  333016 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-406673"
	I0916 11:42:00.721279  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.722177  333016 out.go:177] * Verifying Kubernetes components...
	I0916 11:42:00.723934  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:42:00.752502  333016 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-406673"
	I0916 11:42:00.752539  333016 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:42:00.755899  333016 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:42:00.756270  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.757582  333016 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:42:00.757605  333016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:42:00.757662  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:42:00.776137  333016 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:42:00.776158  333016 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:42:00.776215  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:42:00.777250  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:42:00.793326  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:42:01.011292  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:42:01.019742  333016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:42:01.096506  333016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:42:01.120265  333016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:42:01.516905  333016 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
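
The long sed pipeline at 11:42:01 splices a hosts block (plus a log directive) into the CoreDNS Corefile so pods can resolve host.minikube.internal to the gateway address. The result can be verified with the same pinned kubectl (a sketch; GNU grep assumed for -A):

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    # expected stanza:
    #   hosts {
    #      192.168.103.1 host.minikube.internal
    #      fallthrough
    #   }
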
	I0916 11:42:01.535935  333016 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:42:01.796472  333016 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:42:01.798178  333016 addons.go:510] duration metric: took 1.077738203s for enable addons: enabled=[default-storageclass storage-provisioner]
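
Addon state per profile can also be confirmed from the minikube CLI (profile name taken from this run):

    minikube -p old-k8s-version-406673 addons list
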
	I0916 11:42:02.021938  333016 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-406673" context rescaled to 1 replicas
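
The "rescaled to 1 replicas" line is minikube trimming the stock two-replica coredns Deployment down to one on a single-node cluster; the equivalent manual command, with any working kubectl against this cluster, would be:

    kubectl -n kube-system scale deployment coredns --replicas=1
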
	I0916 11:42:03.540269  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:06.039405  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:08.039450  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:10.578149  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:13.039705  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:15.040491  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:17.539137  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:19.539764  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:22.039970  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:24.539528  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:27.039570  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:29.038931  333016 node_ready.go:49] node "old-k8s-version-406673" has status "Ready":"True"
	I0916 11:42:29.038954  333016 node_ready.go:38] duration metric: took 27.502986487s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:42:29.038963  333016 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0916 11:42:29.045578  333016 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:42:31.049070  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:42:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 11:42:33.049733  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:42:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
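
The two Pending probes above still show the stale Unschedulable condition stamped 11:42:00, from when the scheduler declined to place coredns on a node carrying the not-ready taint. Scheduling unblocked once kindnet brought networking up and the node went Ready at 11:42:29, and by 11:42:35 the pod reports a plain Ready:False while its container starts. The taint is visible with any working kubectl (the host copy here is broken, per the exec format errors):

    kubectl describe node old-k8s-version-406673 | grep -i taints
    # until the CNI is up this typically shows:
    #   Taints: node.kubernetes.io/not-ready:NoSchedule
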
	I0916 11:42:35.051703  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:37.552157  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:40.051048  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:40.551252  333016 pod_ready.go:93] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"True"
	I0916 11:42:40.551275  333016 pod_ready.go:82] duration metric: took 11.505673624s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:42:40.551286  333016 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:42:42.558047  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:45.057493  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:47.057603  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:49.556869  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:51.557684  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:54.056762  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:56.058223  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:58.557744  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:01.057276  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:03.058237  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:05.557660  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:08.057228  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:10.057485  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:12.556652  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:14.557496  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:17.057859  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:19.058214  333016 pod_ready.go:93] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.058243  333016 pod_ready.go:82] duration metric: took 38.506948862s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.058265  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.063031  333016 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.063055  333016 pod_ready.go:82] duration metric: took 4.781482ms for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.063071  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.069862  333016 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.069881  333016 pod_ready.go:82] duration metric: took 6.802265ms for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.069890  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.074303  333016 pod_ready.go:93] pod "kube-proxy-pcbvp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.074328  333016 pod_ready.go:82] duration metric: took 4.43151ms for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.074338  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.078134  333016 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.078154  333016 pod_ready.go:82] duration metric: took 3.809778ms for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.078164  333016 pod_ready.go:39] duration metric: took 50.039189729s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:43:19.078180  333016 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:43:19.078230  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:19.078279  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:19.114156  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:19.114176  333016 cri.go:89] found id: ""
	I0916 11:43:19.114183  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:19.114235  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.117974  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:19.118035  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:19.152156  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:19.152181  333016 cri.go:89] found id: ""
	I0916 11:43:19.152192  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:19.152246  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.155805  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:19.155863  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:19.190036  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:19.190057  333016 cri.go:89] found id: ""
	I0916 11:43:19.190064  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:19.190111  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.193389  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:19.193445  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:19.227236  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:19.227263  333016 cri.go:89] found id: ""
	I0916 11:43:19.227270  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:19.227325  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.230784  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:19.230843  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:19.264360  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:19.264380  333016 cri.go:89] found id: ""
	I0916 11:43:19.264388  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:19.264437  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.267844  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:19.267916  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:19.300894  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:19.300916  333016 cri.go:89] found id: ""
	I0916 11:43:19.300925  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:19.300982  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.304410  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:19.304463  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:19.338532  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:19.338561  333016 cri.go:89] found id: ""
	I0916 11:43:19.338570  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:19.338617  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.342059  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:19.342087  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:19.375568  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:19.375598  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:19.412566  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:19.412600  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:19.447709  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:19.447738  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:19.485244  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:19.485272  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:19.583549  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:19.583577  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:19.619156  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:19.619188  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:19.664569  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:19.664605  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:19.698129  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:19.698158  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:19.747705  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:19.747738  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:19.798683  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:19.798720  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:19.862046  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:19.862082  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
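
Each "Gathering logs for ..." step above shells out through the SSH runner with a 400-line cap; the same data can be pulled by hand on the node by resolving the container ID first and then tailing its log (the ID below is from this run; substitute a current one):

    sudo crictl ps -a --quiet --name=kube-apiserver     # -> container ID
    sudo crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02
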
	I0916 11:43:22.384464  333016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:43:22.396937  333016 api_server.go:72] duration metric: took 1m21.676729889s to wait for apiserver process to appear ...
	I0916 11:43:22.396965  333016 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:43:22.397008  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:22.397062  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:22.430612  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:22.430638  333016 cri.go:89] found id: ""
	I0916 11:43:22.430646  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:22.430694  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.434324  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:22.434382  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:22.469323  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:22.469375  333016 cri.go:89] found id: ""
	I0916 11:43:22.469385  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:22.469455  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.473369  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:22.473438  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:22.507487  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:22.507514  333016 cri.go:89] found id: ""
	I0916 11:43:22.507524  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:22.507610  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.511481  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:22.511553  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:22.546774  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:22.546797  333016 cri.go:89] found id: ""
	I0916 11:43:22.546806  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:22.546854  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.550741  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:22.550815  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:22.584441  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:22.584466  333016 cri.go:89] found id: ""
	I0916 11:43:22.584478  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:22.584518  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.587995  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:22.588052  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:22.621210  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:22.621232  333016 cri.go:89] found id: ""
	I0916 11:43:22.621238  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:22.621288  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.624788  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:22.624860  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:22.659577  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:22.659601  333016 cri.go:89] found id: ""
	I0916 11:43:22.659622  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:22.659672  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.663356  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:22.663381  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:22.759410  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:22.759439  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:22.794834  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:22.794863  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:22.834275  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:22.834316  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:22.868286  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:22.868315  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:22.917081  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:22.917114  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:22.967952  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:22.967987  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:23.027899  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:23.027937  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:43:23.048542  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:23.048576  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:23.086646  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:23.086676  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:23.122143  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:23.122173  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:23.169305  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:23.169352  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:25.703925  333016 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:43:25.710132  333016 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:43:25.711030  333016 api_server.go:141] control plane version: v1.20.0
	I0916 11:43:25.711051  333016 api_server.go:131] duration metric: took 3.314079399s to wait for apiserver health ...
	I0916 11:43:25.711059  333016 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:43:25.711077  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:25.711124  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:25.744083  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:25.744104  333016 cri.go:89] found id: ""
	I0916 11:43:25.744114  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:25.744169  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.747732  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:25.747806  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:25.780830  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:25.780855  333016 cri.go:89] found id: ""
	I0916 11:43:25.780864  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:25.780905  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.784503  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:25.784565  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:25.819038  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:25.819061  333016 cri.go:89] found id: ""
	I0916 11:43:25.819068  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:25.819116  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.822868  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:25.822952  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:25.857513  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:25.857536  333016 cri.go:89] found id: ""
	I0916 11:43:25.857545  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:25.857604  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.861133  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:25.861199  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:25.895136  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:25.895165  333016 cri.go:89] found id: ""
	I0916 11:43:25.895175  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:25.895233  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.898774  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:25.898849  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:25.932895  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:25.932918  333016 cri.go:89] found id: ""
	I0916 11:43:25.932927  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:25.932981  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.936427  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:25.936488  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:25.972284  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:25.972305  333016 cri.go:89] found id: ""
	I0916 11:43:25.972312  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:25.972351  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.975973  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:25.976004  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:43:25.996792  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:25.996823  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:26.043167  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:26.043205  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:26.079042  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:26.079070  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:26.116242  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:26.116270  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:26.152271  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:26.152296  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:26.202878  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:26.202913  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:26.264457  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:26.264495  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:26.363604  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:26.363636  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:26.398030  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:26.398055  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:26.431498  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:26.431531  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:26.479671  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:26.479703  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:29.023411  333016 system_pods.go:59] 8 kube-system pods found
	I0916 11:43:29.023440  333016 system_pods.go:61] "coredns-74ff55c5b-6xlgw" [684992a2-7081-4df3-a73e-a21569a28ce6] Running
	I0916 11:43:29.023445  333016 system_pods.go:61] "etcd-old-k8s-version-406673" [d8c0d4cd-1c4a-4881-9f18-d54a4433f8ab] Running
	I0916 11:43:29.023448  333016 system_pods.go:61] "kindnet-mjcgf" [5888dd63-6767-4920-ac13-becf70cd6481] Running
	I0916 11:43:29.023452  333016 system_pods.go:61] "kube-apiserver-old-k8s-version-406673" [00ed1d06-176e-453e-a0bf-29244d78687c] Running
	I0916 11:43:29.023455  333016 system_pods.go:61] "kube-controller-manager-old-k8s-version-406673" [5b6c1595-560a-41d9-b653-9bf2a5c85f67] Running
	I0916 11:43:29.023459  333016 system_pods.go:61] "kube-proxy-pcbvp" [d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1] Running
	I0916 11:43:29.023462  333016 system_pods.go:61] "kube-scheduler-old-k8s-version-406673" [d6f812b4-bf33-454d-8375-fe804f003016] Running
	I0916 11:43:29.023465  333016 system_pods.go:61] "storage-provisioner" [28d14db2-66e4-43f6-8288-4ddc0f3a994c] Running
	I0916 11:43:29.023471  333016 system_pods.go:74] duration metric: took 3.312405641s to wait for pod list to return data ...
	I0916 11:43:29.023478  333016 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:43:29.025649  333016 default_sa.go:45] found service account: "default"
	I0916 11:43:29.025676  333016 default_sa.go:55] duration metric: took 2.190408ms for default service account to be created ...
	I0916 11:43:29.025686  333016 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:43:29.033323  333016 system_pods.go:86] 8 kube-system pods found
	I0916 11:43:29.033381  333016 system_pods.go:89] "coredns-74ff55c5b-6xlgw" [684992a2-7081-4df3-a73e-a21569a28ce6] Running
	I0916 11:43:29.033390  333016 system_pods.go:89] "etcd-old-k8s-version-406673" [d8c0d4cd-1c4a-4881-9f18-d54a4433f8ab] Running
	I0916 11:43:29.033396  333016 system_pods.go:89] "kindnet-mjcgf" [5888dd63-6767-4920-ac13-becf70cd6481] Running
	I0916 11:43:29.033405  333016 system_pods.go:89] "kube-apiserver-old-k8s-version-406673" [00ed1d06-176e-453e-a0bf-29244d78687c] Running
	I0916 11:43:29.033411  333016 system_pods.go:89] "kube-controller-manager-old-k8s-version-406673" [5b6c1595-560a-41d9-b653-9bf2a5c85f67] Running
	I0916 11:43:29.033418  333016 system_pods.go:89] "kube-proxy-pcbvp" [d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1] Running
	I0916 11:43:29.033423  333016 system_pods.go:89] "kube-scheduler-old-k8s-version-406673" [d6f812b4-bf33-454d-8375-fe804f003016] Running
	I0916 11:43:29.033431  333016 system_pods.go:89] "storage-provisioner" [28d14db2-66e4-43f6-8288-4ddc0f3a994c] Running
	I0916 11:43:29.033444  333016 system_pods.go:126] duration metric: took 7.751194ms to wait for k8s-apps to be running ...
	I0916 11:43:29.033457  333016 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:43:29.033512  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:43:29.045813  333016 system_svc.go:56] duration metric: took 12.349678ms WaitForService to wait for kubelet
	I0916 11:43:29.045837  333016 kubeadm.go:582] duration metric: took 1m28.325673057s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:43:29.045852  333016 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:43:29.048437  333016 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:43:29.048464  333016 node_conditions.go:123] node cpu capacity is 8
	I0916 11:43:29.048478  333016 node_conditions.go:105] duration metric: took 2.620808ms to run NodePressure ...
	I0916 11:43:29.048492  333016 start.go:241] waiting for startup goroutines ...
	I0916 11:43:29.048501  333016 start.go:246] waiting for cluster config update ...
	I0916 11:43:29.048515  333016 start.go:255] writing updated cluster config ...
	I0916 11:43:29.048782  333016 ssh_runner.go:195] Run: rm -f paused
	I0916 11:43:29.055620  333016 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-406673" cluster and "default" namespace by default
	E0916 11:43:29.057070  333016 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.856770052Z" level=info msg="Checking pod kube-system_coredns-74ff55c5b-6xlgw for CNI network kindnet (type=ptp)"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.859357089Z" level=info msg="Ran pod sandbox eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32 with infra container: kube-system/storage-provisioner/POD" id=c902770d-194b-4540-8ac4-7301f0545b96 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.859526385Z" level=info msg="Ran pod sandbox 15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33 with infra container: kube-system/coredns-74ff55c5b-6xlgw/POD" id=f576a970-6d7c-4b43-af9e-da0ea0eb3ad3 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860222892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c4d6a36e-e674-485c-a8db-f3ac539a2447 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860278624Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=10f143f9-9fa6-4a76-a15b-32952af72ee1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860399431Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c4d6a36e-e674-485c-a8db-f3ac539a2447 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860422997Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=10f143f9-9fa6-4a76-a15b-32952af72ee1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860976080Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6bb0be3-9f1a-4237-81de-68bd60b184b1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861016518Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=306f539a-c560-4371-8f00-331724f83370 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861171586Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=306f539a-c560-4371-8f00-331724f83370 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861259251Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b6bb0be3-9f1a-4237-81de-68bd60b184b1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862001870Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=eb2e30c4-75e5-4521-ab72-cc7869c1fce1 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862040425Z" level=info msg="Creating container: kube-system/coredns-74ff55c5b-6xlgw/coredns" id=1df64f26-2035-4f6b-95f6-226bec645aec name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862080433Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862120024Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.878701585Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1e2bfb353a2952745b9f6b0c04ba55371973020ad1e5e874c5dd82658c63be84/merged/etc/passwd: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.878750411Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1e2bfb353a2952745b9f6b0c04ba55371973020ad1e5e874c5dd82658c63be84/merged/etc/group: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.879154582Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/99e7b464885912b28b588c11a83ff47920ae95ffea4c649719c5189f8ead6e3c/merged/etc/passwd: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.879187538Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/99e7b464885912b28b588c11a83ff47920ae95ffea4c649719c5189f8ead6e3c/merged/etc/group: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.918889133Z" level=info msg="Created container d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0: kube-system/coredns-74ff55c5b-6xlgw/coredns" id=1df64f26-2035-4f6b-95f6-226bec645aec name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.919466702Z" level=info msg="Starting container: d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0" id=38ac1fc0-fac9-4a00-8484-820e0b437755 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.922270175Z" level=info msg="Created container 33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d: kube-system/storage-provisioner/storage-provisioner" id=eb2e30c4-75e5-4521-ab72-cc7869c1fce1 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.922845211Z" level=info msg="Starting container: 33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d" id=7db5cf50-ec68-4b78-aebf-9b05d6d07e42 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.926237079Z" level=info msg="Started container" PID=2929 containerID=d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0 description=kube-system/coredns-74ff55c5b-6xlgw/coredns id=38ac1fc0-fac9-4a00-8484-820e0b437755 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.929776182Z" level=info msg="Started container" PID=2936 containerID=33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d description=kube-system/storage-provisioner/storage-provisioner id=7db5cf50-ec68-4b78-aebf-9b05d6d07e42 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32
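
The "Checking image status" entries above are gRPC calls to the CRI ImageService on CRI-O's socket (/var/run/crio/crio.sock, per the cri-socket annotation in the node description below). A minimal sketch of issuing the same ImageStatus query through the published CRI API follows; the socket path, service version, and image name are taken from this log, while the stand-alone client around them is assumed (it also assumes a grpc-go version that resolves unix:// targets):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        // Same endpoint CRI-O serves on this node.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // Mirrors the /runtime.v1alpha2.ImageService/ImageStatus calls above.
        resp, err := pb.NewImageServiceClient(conn).ImageStatus(ctx, &pb.ImageStatusRequest{
            Image: &pb.ImageSpec{Image: "gcr.io/k8s-minikube/storage-provisioner:v5"},
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("id=%s repoTags=%v\n", resp.GetImage().GetId(), resp.GetImage().GetRepoTags())
    }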
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4db88b336bed       bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16                                     56 seconds ago       Running             coredns                   0                   15c3605023254       coredns-74ff55c5b-6xlgw
	33a7974b5f09f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     56 seconds ago       Running             storage-provisioner       0                   eee3fde4da330       storage-provisioner
	342a012c428e0       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b   About a minute ago   Running             kindnet-cni               0                   3d1945d7b04c2       kindnet-mjcgf
	de3eaebd990dc       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc                                     About a minute ago   Running             kube-proxy                0                   8c9b9fc80cd42       kube-proxy-pcbvp
	6f6e59b67f114       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899                                     About a minute ago   Running             kube-scheduler            0                   dbdf46e21272e       kube-scheduler-old-k8s-version-406673
	31259a2842c01       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99                                     About a minute ago   Running             kube-apiserver            0                   2bf825db35d7b       kube-apiserver-old-k8s-version-406673
	1612fad1a4d07       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                     About a minute ago   Running             etcd                      0                   1eaac4c5376fc       etcd-old-k8s-version-406673
	9aff740155270       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080                                     About a minute ago   Running             kube-controller-manager   0                   483dd0ba7fd68       kube-controller-manager-old-k8s-version-406673
	
	
	==> coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:38442 - 48402 "HINFO IN 8440324266966115617.7448481208015864567. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011622953s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-406673
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-406673
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-406673
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_41_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:41:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-406673
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:43:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:42:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-406673
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 318da86b3a3c4fd0827c12705ac51529
	  System UUID:                2d5bda39-09b0-43d0-95f9-1ff418499524
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-6xlgw                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 etcd-old-k8s-version-406673                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         101s
	  kube-system                 kindnet-mjcgf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-old-k8s-version-406673             250m (3%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-old-k8s-version-406673    200m (2%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-pcbvp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-old-k8s-version-406673             100m (1%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 102s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s  kubelet     Node old-k8s-version-406673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                62s   kubelet     Node old-k8s-version-406673 status is now: NodeReady
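
For reference, the percentages in the Allocated resources table follow directly from the capacity figures above: 850m of CPU requested against 8000m of capacity is 850/8000 ≈ 10.6%, shown truncated as 10%; 220Mi of memory (225280Ki) against 32859320Ki is 225280/32859320 ≈ 0.7%, which truncates to the 0% shown.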
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 7b 93 72 59 99 08 06
	[Sep16 11:38] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e c8 59 6d ba 48 08 06
	[Sep16 11:39] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 0e 56 ba 2b 08 08 06
	[  +0.072831] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 e4 c5 5d 5b cd 08 06
	
	
	==> etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] <==
	2024-09-16 11:41:36.508821 I | embed: listening for peers on 192.168.103.2:2380
	2024-09-16 11:41:36.508921 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 is starting a new election at term 1
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 became candidate at term 2
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 became leader at term 2
	raft2024/09/16 11:41:37 INFO: raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2
	2024-09-16 11:41:37.294464 I | etcdserver: published {Name:old-k8s-version-406673 ClientURLs:[https://192.168.103.2:2379]} to cluster 3336683c081d149d
	2024-09-16 11:41:37.294487 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-16 11:41:37.294537 I | embed: ready to serve client requests
	2024-09-16 11:41:37.294728 I | embed: ready to serve client requests
	2024-09-16 11:41:37.295159 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-16 11:41:37.296260 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-16 11:41:37.297103 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-16 11:41:37.298217 I | embed: serving client requests on 192.168.103.2:2379
	2024-09-16 11:41:55.011036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:04.397724 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:14.397752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:24.397850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:34.397672 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:44.397732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:54.397786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:04.397868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:14.397710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:24.397875 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:43:30 up  1:25,  0 users,  load average: 0.94, 1.11, 0.91
	Linux old-k8s-version-406673 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] <==
	I0916 11:42:05.095640       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:42:05.095656       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:42:05.095674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:42:05.394421       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:42:05.394469       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:42:05.394477       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:42:05.695253       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:42:05.695279       1 metrics.go:61] Registering metrics
	I0916 11:42:05.695331       1 controller.go:374] Syncing nftables rules
	I0916 11:42:15.397552       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:15.397613       1 main.go:299] handling current node
	I0916 11:42:25.398751       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:25.398783       1 main.go:299] handling current node
	I0916 11:42:35.395218       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:35.395262       1 main.go:299] handling current node
	I0916 11:42:45.397419       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:45.397464       1 main.go:299] handling current node
	I0916 11:42:55.402217       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:55.402249       1 main.go:299] handling current node
	I0916 11:43:05.394944       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:05.394981       1 main.go:299] handling current node
	I0916 11:43:15.397437       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:15.397487       1 main.go:299] handling current node
	I0916 11:43:25.397439       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:25.397514       1 main.go:299] handling current node
	
	
	==> kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] <==
	I0916 11:41:41.453400       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0916 11:41:41.453431       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 11:41:41.458485       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0916 11:41:41.461410       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:41:41.461427       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0916 11:41:41.806470       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:41:41.841007       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0916 11:41:41.917086       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:41:41.918224       1 controller.go:606] quota admission added evaluator for: endpoints
	I0916 11:41:41.921847       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:41:42.967364       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0916 11:41:43.351236       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0916 11:41:43.504028       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0916 11:41:48.768075       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:42:00.244433       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:42:00.297844       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0916 11:42:11.190173       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:42:11.190214       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:42:11.190222       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:42:42.093321       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:42:42.093393       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:42:42.093403       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:43:20.270631       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:43:20.270672       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:43:20.270679       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] <==
	I0916 11:42:00.295414       1 shared_informer.go:247] Caches are synced for endpoint 
	I0916 11:42:00.295446       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0916 11:42:00.301750       1 range_allocator.go:373] Set node old-k8s-version-406673 PodCIDR to [10.244.0.0/24]
	I0916 11:42:00.302502       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0916 11:42:00.303655       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pcbvp"
	I0916 11:42:00.303679       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mjcgf"
	I0916 11:42:00.307508       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-406673" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0916 11:42:00.312688       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-q8x49"
	I0916 11:42:00.321152       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-6xlgw"
	I0916 11:42:00.393566       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0916 11:42:00.393684       1 shared_informer.go:247] Caches are synced for HPA 
	I0916 11:42:00.393856       1 shared_informer.go:247] Caches are synced for disruption 
	I0916 11:42:00.393875       1 disruption.go:339] Sending events to api server.
	E0916 11:42:00.408825       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"366e9dff-395f-41eb-aaa4-5fe8a77c24b1", ResourceVersion:"267", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862083703, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014c20c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014c20e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014c2100), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2120), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2160), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014c2180)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014c21c0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0010e7ce0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0005f4238), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000430fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00060a638)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0005f4280)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0916 11:42:00.423745       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:42:00.453287       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0916 11:42:00.469890       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:42:00.626879       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0916 11:42:00.895582       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:42:00.895685       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0916 11:42:00.927104       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:42:01.537394       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0916 11:42:01.602983       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-q8x49"
	I0916 11:42:30.295800       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
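
The long daemon_controller.go error above bottoms out in a routine optimistic-concurrency conflict: 'Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified'. The controller simply re-queues and retries; client code that hits the same conflict normally re-reads the object and retries the update. A minimal sketch using client-go's retry helper, assuming an already-constructed *kubernetes.Clientset (the mutation itself is hypothetical):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // updateKindnet retries on exactly the conflict the controller-manager logged:
    // re-fetch the latest resourceVersion, reapply the change, try the update again.
    func updateKindnet(ctx context.Context, clientset *kubernetes.Clientset) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            ds, err := clientset.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if ds.Labels == nil {
                ds.Labels = map[string]string{}
            }
            ds.Labels["example/touched"] = "true" // hypothetical change
            _, err = clientset.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
            return err
        })
    }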
	
	
	==> kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] <==
	I0916 11:42:00.995500       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:42:00.995590       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:42:01.010731       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:42:01.010826       1 server_others.go:185] Using iptables Proxier.
	I0916 11:42:01.012001       1 server.go:650] Version: v1.20.0
	I0916 11:42:01.013499       1 config.go:315] Starting service config controller
	I0916 11:42:01.013577       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:42:01.013592       1 config.go:224] Starting endpoint slice config controller
	I0916 11:42:01.013614       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:42:01.113797       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:42:01.113806       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] <==
	W0916 11:41:40.476670       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:41:40.476699       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:41:40.476709       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:41:40.476720       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:41:40.516274       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:41:40.516365       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:41:40.516377       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:41:40.516397       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0916 11:41:40.517924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:40.524733       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:41:40.593689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:40.593833       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:41:40.594045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:41:40.594338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:41:40.594501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:40.594699       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:41:40.594858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:41:40.595116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:41:40.595261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:41:40.595399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:41:41.428933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:41.508045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.594591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.695406       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0916 11:41:44.916550       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.318827    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.395303    2069 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.396094    2069 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: E0916 11:42:00.396471    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495219    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/5888dd63-6767-4920-ac13-becf70cd6481-xtables-lock") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495265    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-kube-proxy") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495307    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-h79b7" (UniqueName: "kubernetes.io/secret/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-kube-proxy-token-h79b7") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495404    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-lib-modules") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495507    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/5888dd63-6767-4920-ac13-becf70cd6481-cni-cfg") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495548    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/5888dd63-6767-4920-ac13-becf70cd6481-lib-modules") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495604    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-c5qt9" (UniqueName: "kubernetes.io/secret/5888dd63-6767-4920-ac13-becf70cd6481-kindnet-token-c5qt9") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495632    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-xtables-lock") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: W0916 11:42:00.633660    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-3d1945d7b04c2d25d7a1cc6d0bafc6adce69c9f092118e0e86af68ccc80d1014 WatchSource:0}: Error finding container 3d1945d7b04c2d25d7a1cc6d0bafc6adce69c9f092118e0e86af68ccc80d1014: Status 404 returned error &{%!s(*http.body=&{0xc0009ffd80 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: W0916 11:42:00.640993    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-8c9b9fc80cd428329dc256f5b234864e1037d0a44e37ad7d8aa19e4546d83c7a WatchSource:0}: Error finding container 8c9b9fc80cd428329dc256f5b234864e1037d0a44e37ad7d8aa19e4546d83c7a: Status 404 returned error &{%!s(*http.body=&{0xc000e4daa0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:03 old-k8s-version-406673 kubelet[2069]: E0916 11:42:03.893546    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:08 old-k8s-version-406673 kubelet[2069]: E0916 11:42:08.894227    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:13 old-k8s-version-406673 kubelet[2069]: E0916 11:42:13.894965    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.532522    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.534500    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669791    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-767ft" (UniqueName: "kubernetes.io/secret/28d14db2-66e4-43f6-8288-4ddc0f3a994c-storage-provisioner-token-767ft") pod "storage-provisioner" (UID: "28d14db2-66e4-43f6-8288-4ddc0f3a994c")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669832    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/28d14db2-66e4-43f6-8288-4ddc0f3a994c-tmp") pod "storage-provisioner" (UID: "28d14db2-66e4-43f6-8288-4ddc0f3a994c")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669854    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/684992a2-7081-4df3-a73e-a21569a28ce6-config-volume") pod "coredns-74ff55c5b-6xlgw" (UID: "684992a2-7081-4df3-a73e-a21569a28ce6")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669868    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-75kvx" (UniqueName: "kubernetes.io/secret/684992a2-7081-4df3-a73e-a21569a28ce6-coredns-token-75kvx") pod "coredns-74ff55c5b-6xlgw" (UID: "684992a2-7081-4df3-a73e-a21569a28ce6")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: W0916 11:42:33.858343    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32 WatchSource:0}: Error finding container eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32: Status 404 returned error &{%!s(*http.body=&{0xc0001a8060 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: W0916 11:42:33.859070    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33 WatchSource:0}: Error finding container 15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33: Status 404 returned error &{%!s(*http.body=&{0xc0001b7f60 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	
	
	==> storage-provisioner [33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d] <==
	I0916 11:42:33.942881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:42:33.952289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:42:33.952327       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:42:33.995195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:42:33.995263       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88c65391-c353-4f97-bac8-9bd49b9f0588", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77 became leader
	I0916 11:42:33.995326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	I0916 11:42:34.095721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
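
The storage-provisioner lines above are the standard client-go leader-election sequence: attempt the lease, acquire it, emit a LeaderElection event, then start the controller. This old provisioner competes for an Endpoints object; the sketch below shows the same pattern with the current Lease-based lock, so it is an approximation rather than the provisioner's actual code. The namespace and lease name are taken from the log; running in-cluster is assumed:

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname()
        // Same namespace/name the provisioner log shows it competing for.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; serving") },
                OnStoppedLeading: func() { log.Println("lost lease; stopping") },
            },
        })
    }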
	
-- /stdout --
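The storage-provisioner log above shows the usual single-active-controller startup sequence: initialize, contend for the kube-system/k8s.io-minikube-hostpath lock, and only start the provisioner controller once the lease is held, so at most one replica provisions volumes at a time. This old build still locks on an Endpoints object; below is a rough sketch of the same pattern against client-go's current Lease API (illustrative only, not minikube's actual code; the lock name is borrowed from the log, the timings and in-cluster config are assumptions):

// Sketch: leader election before starting a controller, mirroring the
// storage-provisioner log above (Lease API instead of legacy Endpoints).
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the controller runs in-cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // unique identity per candidate replica

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long an acquired lease stays valid
		RenewDeadline: 10 * time.Second, // leader must renew within this window
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// lease acquired: start the provisioning controller here
			},
			OnStoppedLeading: func() { os.Exit(0) }, // lost the lease: stop doing work
		},
	})
}

RunOrDie blocks and keeps renewing the lease; if renewal fails past RenewDeadline, OnStoppedLeading fires and the replica must stop acting as leader.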
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (615.665µs)
helpers_test.go:263: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
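Every kubectl invocation in this post-mortem fails the same way, in well under a millisecond: fork/exec returns "exec format error" (ENOEXEC), meaning the kernel refused to execute /usr/local/bin/kubectl at all. That almost always means the binary was built for a different architecture than this linux/amd64 host, or is not a real executable (e.g. a truncated download). A minimal sketch of how one could confirm that, assuming the file is expected to be an ELF binary (this check is not part of the test harness):

// Sketch: report a binary's target architecture to explain an
// "exec format error" (ENOEXEC) from fork/exec.
package main

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	path := "/usr/local/bin/kubectl" // the binary that failed to exec
	f, err := elf.Open(path)
	if err != nil {
		// Not valid ELF at all: truncated download, HTML error page,
		// script without a shebang, etc.
		fmt.Fprintf(os.Stderr, "%s: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()
	// An amd64 host can only exec EM_X86_64 ELF binaries (absent binfmt_misc).
	fmt.Printf("%s: class=%v machine=%v\n", path, f.Class, f.Machine)
}

Since every kubectl step dies before reaching the API server, the kubectl-driven failures below reflect a broken client binary on the host rather than anything the cluster did.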
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-406673
helpers_test.go:235: (dbg) docker inspect old-k8s-version-406673:
-- stdout --
	[
	    {
	        "Id": "28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b",
	        "Created": "2024-09-16T11:41:15.966557614Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:41:16.106919451Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hosts",
	        "LogPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b-json.log",
	        "Name": "/old-k8s-version-406673",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-406673:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-406673",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-406673",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-406673/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-406673",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eeb5fb104290f5dbbc6dda4f44d1ede524b4eca3b4a1c4e74d210afee339b2c7",
	            "SandboxKey": "/var/run/docker/netns/eeb5fb104290",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-406673": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49cf3e3468396ba01b588ae85b5e7bcdf3e6dcfeb05d207136018542ad1d54df",
	                    "EndpointID": "fd3146eb8ec55f5e8ad65367f8d3d1c86c03f630bbe9fea4a483f6e09022f0f3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-406673",
	                        "28d6c5fc26a9"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
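One detail worth noting in the inspect output above: HostConfig.PortBindings asks for HostPort "" on 127.0.0.1 for every exposed port, which tells Docker to pick ephemeral host ports at container start; the ports actually bound (33088-33092 here) appear only under NetworkSettings.Ports. A small sketch of resolving the API server's mapping programmatically (container name taken from this report; stdlib plus the docker CLI only):

// Sketch: recover the ephemeral host port Docker assigned to 8443/tcp,
// the API server port requested with an empty HostPort above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Just the slice of NetworkSettings.Ports we need from `docker inspect`.
type inspectOutput []struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-406673").Output()
	if err != nil {
		panic(err)
	}
	var info inspectOutput
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	if len(info) == 0 {
		panic("no such container")
	}
	for _, b := range info[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("API server published at %s:%s\n", b.HostIp, b.HostPort)
	}
}

From a shell, docker port old-k8s-version-406673 8443/tcp prints the same mapping.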
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25: (1.154974524s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                         | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                            | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                       | custom-flannel-838467     | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                           |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:41:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:41:09.129839  333016 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:41:09.130137  333016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:41:09.130147  333016 out.go:358] Setting ErrFile to fd 2...
	I0916 11:41:09.130151  333016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:41:09.130336  333016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:41:09.130914  333016 out.go:352] Setting JSON to false
	I0916 11:41:09.132012  333016 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5009,"bootTime":1726481860,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:41:09.132115  333016 start.go:139] virtualization: kvm guest
	I0916 11:41:07.485553  326192 out.go:235]   - Booting up control plane ...
	I0916 11:41:07.485672  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:41:07.485744  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:41:07.486328  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:41:07.495914  326192 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:41:07.501658  326192 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:41:07.501769  326192 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:41:07.587736  326192 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:41:07.587886  326192 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:41:08.094403  326192 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.791161ms
	I0916 11:41:08.094558  326192 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:41:09.134384  333016 out.go:177] * [old-k8s-version-406673] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:41:09.136012  333016 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:41:09.136030  333016 notify.go:220] Checking for updates...
	I0916 11:41:09.138120  333016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:41:09.139236  333016 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:41:09.140392  333016 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:41:09.141671  333016 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:41:09.142978  333016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:41:09.144925  333016 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145143  333016 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145276  333016 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145451  333016 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:41:09.170223  333016 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:41:09.170315  333016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:41:09.249446  333016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:74 SystemTime:2024-09-16 11:41:09.232481204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:41:09.249584  333016 docker.go:318] overlay module found
	I0916 11:41:09.251484  333016 out.go:177] * Using the docker driver based on user configuration
	I0916 11:41:09.252770  333016 start.go:297] selected driver: docker
	I0916 11:41:09.252787  333016 start.go:901] validating driver "docker" against <nil>
	I0916 11:41:09.252803  333016 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:41:09.253988  333016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:41:09.311590  333016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:74 SystemTime:2024-09-16 11:41:09.299494045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:41:09.311826  333016 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:41:09.312127  333016 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:41:09.314426  333016 out.go:177] * Using Docker driver with root privileges
	I0916 11:41:09.316047  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:09.316117  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:09.316131  333016 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:41:09.316215  333016 start.go:340] cluster config:
	{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:41:09.318014  333016 out.go:177] * Starting "old-k8s-version-406673" primary control-plane node in "old-k8s-version-406673" cluster
	I0916 11:41:09.319369  333016 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:41:09.320800  333016 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:41:09.322158  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:09.322191  333016 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:41:09.322200  333016 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 11:41:09.322238  333016 cache.go:56] Caching tarball of preloaded images
	I0916 11:41:09.322344  333016 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:41:09.322360  333016 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 11:41:09.322470  333016 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:41:09.322492  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json: {Name:mk5b7a46b7adef06d8ab94be0a464e9f79922d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:41:09.347179  333016 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:41:09.347202  333016 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:41:09.347274  333016 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:41:09.347293  333016 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:41:09.347302  333016 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:41:09.347311  333016 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:41:09.347321  333016 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:41:09.415165  333016 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:41:09.415223  333016 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:41:09.415268  333016 start.go:360] acquireMachinesLock for old-k8s-version-406673: {Name:mk8e16c995170a3c051ae96503b85729d385d06f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:41:09.415392  333016 start.go:364] duration metric: took 100.574µs to acquireMachinesLock for "old-k8s-version-406673"
	I0916 11:41:09.415421  333016 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:41:09.415511  333016 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:41:13.095977  326192 kubeadm.go:310] [api-check] The API server is healthy after 5.001444204s
	I0916 11:41:13.108645  326192 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:41:13.124915  326192 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:41:13.145729  326192 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:41:13.146046  326192 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-838467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:41:13.155883  326192 kubeadm.go:310] [bootstrap-token] Using token: arlmm3.z93mcdj0fcofrw2j
	I0916 11:41:09.417700  333016 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:41:09.418702  333016 start.go:159] libmachine.API.Create for "old-k8s-version-406673" (driver="docker")
	I0916 11:41:09.418758  333016 client.go:168] LocalClient.Create starting
	I0916 11:41:09.418863  333016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:41:09.418984  333016 main.go:141] libmachine: Decoding PEM data...
	I0916 11:41:09.419005  333016 main.go:141] libmachine: Parsing certificate...
	I0916 11:41:09.419062  333016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:41:09.419084  333016 main.go:141] libmachine: Decoding PEM data...
	I0916 11:41:09.419096  333016 main.go:141] libmachine: Parsing certificate...
	I0916 11:41:09.419492  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:41:09.447356  333016 cli_runner.go:211] docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:41:09.447439  333016 network_create.go:284] running [docker network inspect old-k8s-version-406673] to gather additional debugging logs...
	I0916 11:41:09.447459  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673
	W0916 11:41:09.466477  333016 cli_runner.go:211] docker network inspect old-k8s-version-406673 returned with exit code 1
	I0916 11:41:09.466514  333016 network_create.go:287] error running [docker network inspect old-k8s-version-406673]: docker network inspect old-k8s-version-406673: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-406673 not found
	I0916 11:41:09.466528  333016 network_create.go:289] output of [docker network inspect old-k8s-version-406673]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-406673 not found
	
	** /stderr **
	I0916 11:41:09.466624  333016 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:41:09.484833  333016 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:41:09.485829  333016 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:41:09.486598  333016 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:41:09.487223  333016 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:41:09.487906  333016 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:41:09.488504  333016 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:41:09.489409  333016 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002378380}
	I0916 11:41:09.489435  333016 network_create.go:124] attempt to create docker network old-k8s-version-406673 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:41:09.489487  333016 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-406673 old-k8s-version-406673
	I0916 11:41:09.569199  333016 network_create.go:108] docker network old-k8s-version-406673 192.168.103.0/24 created
	I0916 11:41:09.569238  333016 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-406673" container
	I0916 11:41:09.569290  333016 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:41:09.589253  333016 cli_runner.go:164] Run: docker volume create old-k8s-version-406673 --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:41:09.614891  333016 oci.go:103] Successfully created a docker volume old-k8s-version-406673
	I0916 11:41:09.614987  333016 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-406673-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --entrypoint /usr/bin/test -v old-k8s-version-406673:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:41:10.191535  333016 oci.go:107] Successfully prepared a docker volume old-k8s-version-406673
	I0916 11:41:10.191600  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:10.191641  333016 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:41:10.191709  333016 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-406673:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:41:13.157532  326192 out.go:235]   - Configuring RBAC rules ...
	I0916 11:41:13.157708  326192 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:41:13.161760  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:41:13.168287  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:41:13.171578  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:41:13.175747  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:41:13.178942  326192 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:41:13.556267  326192 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:41:14.729155  326192 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:41:15.223914  326192 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:41:15.225001  326192 kubeadm.go:310] 
	I0916 11:41:15.225130  326192 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:41:15.225153  326192 kubeadm.go:310] 
	I0916 11:41:15.225274  326192 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:41:15.225295  326192 kubeadm.go:310] 
	I0916 11:41:15.225327  326192 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:41:15.225442  326192 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:41:15.225506  326192 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:41:15.225513  326192 kubeadm.go:310] 
	I0916 11:41:15.225585  326192 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:41:15.225594  326192 kubeadm.go:310] 
	I0916 11:41:15.225655  326192 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:41:15.225664  326192 kubeadm.go:310] 
	I0916 11:41:15.225726  326192 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:41:15.225793  326192 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:41:15.225858  326192 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:41:15.225864  326192 kubeadm.go:310] 
	I0916 11:41:15.225946  326192 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:41:15.226044  326192 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:41:15.226052  326192 kubeadm.go:310] 
	I0916 11:41:15.226146  326192 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token arlmm3.z93mcdj0fcofrw2j \
	I0916 11:41:15.226292  326192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:41:15.226330  326192 kubeadm.go:310] 	--control-plane 
	I0916 11:41:15.226339  326192 kubeadm.go:310] 
	I0916 11:41:15.226452  326192 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:41:15.226462  326192 kubeadm.go:310] 
	I0916 11:41:15.226567  326192 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token arlmm3.z93mcdj0fcofrw2j \
	I0916 11:41:15.226726  326192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:41:15.230177  326192 kubeadm.go:310] W0916 11:41:05.103778    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:41:15.230544  326192 kubeadm.go:310] W0916 11:41:05.104714    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:41:15.230854  326192 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:41:15.231019  326192 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
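The join command printed above embeds a bootstrap token and a CA cert hash; both can be re-derived on the control-plane node at any time. A minimal check, assuming kubeadm and openssl are on the node's PATH and the cluster keeps its CA at minikube's CertDir (/var/lib/minikube/certs, per the kubeadm options later in this log):

	# List active bootstrap tokens; the arlmm3.* token above should appear
	kubeadm token list
	# Recompute the value expected by --discovery-token-ca-cert-hash
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'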
	I0916 11:41:15.231059  326192 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0916 11:41:15.240253  326192 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0916 11:41:15.886029  333016 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-406673:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.694248034s)
	I0916 11:41:15.886060  333016 kic.go:203] duration metric: took 5.694418556s to extract preloaded images to volume ...
	W0916 11:41:15.886197  333016 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:41:15.886315  333016 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:41:15.946925  333016 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-406673 --name old-k8s-version-406673 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-406673 --network old-k8s-version-406673 --ip 192.168.103.2 --volume old-k8s-version-406673:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:41:16.264153  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Running}}
	I0916 11:41:16.284080  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.304543  333016 cli_runner.go:164] Run: docker exec old-k8s-version-406673 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:41:16.352309  333016 oci.go:144] the created container "old-k8s-version-406673" has a running status.
	I0916 11:41:16.352352  333016 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa...
	I0916 11:41:16.892301  333016 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:41:16.913952  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.935779  333016 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:41:16.935806  333016 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-406673 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:41:16.980961  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.999374  333016 machine.go:93] provisionDockerMachine start ...
	I0916 11:41:16.999449  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.020322  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.020675  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.020700  333016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:41:17.161159  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:41:17.161186  333016 ubuntu.go:169] provisioning hostname "old-k8s-version-406673"
	I0916 11:41:17.161236  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.179941  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.180126  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.180140  333016 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-406673 && echo "old-k8s-version-406673" | sudo tee /etc/hostname
	I0916 11:41:17.325696  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:41:17.325767  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.343273  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.343458  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.343478  333016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-406673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-406673/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-406673' | sudo tee -a /etc/hosts; 
				fi
			fi
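The script above keys off grep -x (whole-line match) so that an existing 127.0.1.1 entry is rewritten in place rather than duplicated; the result can be confirmed on the node with a quick grep:

	# Exactly one mapping for the hostname should survive the edit
	grep -n 'old-k8s-version-406673' /etc/hosts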
	I0916 11:41:17.481523  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:41:17.481554  333016 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:41:17.481617  333016 ubuntu.go:177] setting up certificates
	I0916 11:41:17.481627  333016 provision.go:84] configureAuth start
	I0916 11:41:17.481677  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:17.501103  333016 provision.go:143] copyHostCerts
	I0916 11:41:17.501181  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:41:17.501192  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:41:17.501278  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:41:17.501418  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:41:17.501433  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:41:17.501476  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:41:17.501610  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:41:17.501622  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:41:17.501659  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:41:17.501734  333016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-406673 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-406673]
	I0916 11:41:17.565274  333016 provision.go:177] copyRemoteCerts
	I0916 11:41:17.565358  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:41:17.565401  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.584534  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:17.682900  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:41:17.707241  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 11:41:17.730893  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:41:17.754303  333016 provision.go:87] duration metric: took 272.661409ms to configureAuth
	I0916 11:41:17.754331  333016 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:41:17.754493  333016 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:41:17.754609  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.772647  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.772839  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.772862  333016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:41:18.029309  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:41:18.029373  333016 machine.go:96] duration metric: took 1.029938873s to provisionDockerMachine
	I0916 11:41:18.029387  333016 client.go:171] duration metric: took 8.610622274s to LocalClient.Create
	I0916 11:41:18.029411  333016 start.go:167] duration metric: took 8.610712242s to libmachine.API.Create "old-k8s-version-406673"
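The CRIO_MINIKUBE_OPTIONS file written above only matters if the kicbase image's crio.service sources /etc/sysconfig/crio.minikube via an EnvironmentFile= directive and expands $CRIO_MINIKUBE_OPTIONS in its ExecStart; that assumption can be checked on the node after the restart:

	# The merged unit should reference /etc/sysconfig/crio.minikube
	systemctl cat crio
	# The restarted daemon should carry the flag on its command line
	ps -o args= -C crio | grep -- --insecure-registry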
	I0916 11:41:18.029423  333016 start.go:293] postStartSetup for "old-k8s-version-406673" (driver="docker")
	I0916 11:41:18.029438  333016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:41:18.029502  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:41:18.029565  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.053377  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.151531  333016 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:41:18.155078  333016 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:41:18.155116  333016 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:41:18.155127  333016 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:41:18.155135  333016 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:41:18.155148  333016 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:41:18.155221  333016 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:41:18.155343  333016 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:41:18.155459  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:41:18.164209  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:41:18.188983  333016 start.go:296] duration metric: took 159.545394ms for postStartSetup
	I0916 11:41:18.189414  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:18.208296  333016 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:41:18.208603  333016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:41:18.208646  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.226298  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.318240  333016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:41:18.322605  333016 start.go:128] duration metric: took 8.907078338s to createHost
	I0916 11:41:18.322633  333016 start.go:83] releasing machines lock for "old-k8s-version-406673", held for 8.907228105s
	I0916 11:41:18.322689  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:18.341454  333016 ssh_runner.go:195] Run: cat /version.json
	I0916 11:41:18.341497  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.341552  333016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:41:18.341624  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.361726  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.362565  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.531472  333016 ssh_runner.go:195] Run: systemctl --version
	I0916 11:41:18.535744  333016 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:41:18.683220  333016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:41:18.690107  333016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:41:18.713733  333016 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:41:18.713813  333016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:41:18.747022  333016 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
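Renaming to *.mk_disabled keeps the original CNI configs recoverable; undoing the disablement is just the inverse rename (a sketch, run on the node):

	# Strip the .mk_disabled suffix back off every disabled config
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;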
	I0916 11:41:18.747047  333016 start.go:495] detecting cgroup driver to use...
	I0916 11:41:18.747084  333016 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:41:18.747140  333016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:41:18.762745  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:41:18.774503  333016 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:41:18.774568  333016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:41:18.787349  333016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:41:18.801095  333016 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:41:18.890378  333016 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:41:18.976389  333016 docker.go:233] disabling docker service ...
	I0916 11:41:18.976456  333016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:41:19.000019  333016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:41:19.012839  333016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:41:19.097510  333016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:41:15.242201  326192 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:41:15.242282  326192 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0916 11:41:15.247506  326192 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0916 11:41:15.247546  326192 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0916 11:41:15.272691  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:41:15.900673  326192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:41:15.900751  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:15.900763  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-838467 minikube.k8s.io/updated_at=2024_09_16T11_41_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=custom-flannel-838467 minikube.k8s.io/primary=true
	I0916 11:41:15.909744  326192 ops.go:34] apiserver oom_adj: -16
	I0916 11:41:16.023309  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:16.524490  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:17.023552  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:17.524056  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:18.023739  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:18.523649  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:19.024135  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:19.147138  326192 kubeadm.go:1113] duration metric: took 3.246461505s to wait for elevateKubeSystemPrivileges
	I0916 11:41:19.147176  326192 kubeadm.go:394] duration metric: took 14.233006135s to StartCluster
	I0916 11:41:19.147199  326192 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:19.147270  326192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:41:19.148868  326192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:19.149075  326192 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:41:19.149161  326192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:41:19.149222  326192 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:41:19.149310  326192 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-838467"
	I0916 11:41:19.149329  326192 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-838467"
	I0916 11:41:19.149371  326192 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-838467"
	I0916 11:41:19.149383  326192 host.go:66] Checking if "custom-flannel-838467" exists ...
	I0916 11:41:19.149387  326192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-838467"
	I0916 11:41:19.149454  326192 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:19.149819  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.150001  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.151132  326192 out.go:177] * Verifying Kubernetes components...
	I0916 11:41:19.152474  326192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:19.173524  326192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:19.203214  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:41:19.218863  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:41:19.238609  333016 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 11:41:19.238684  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.250087  333016 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:41:19.250145  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.259354  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.268531  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
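Taken together, the sed edits above leave the drop-in roughly in this shape (an illustrative sketch; the kicbase image's real 02-crio.conf carries additional keys):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"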
	I0916 11:41:19.279027  333016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:41:19.287949  333016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:41:19.297178  333016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:41:19.307577  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:19.387191  333016 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:41:19.487654  333016 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:41:19.487710  333016 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:41:19.491139  333016 start.go:563] Will wait 60s for crictl version
	I0916 11:41:19.491188  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:19.496116  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:41:19.544501  333016 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:41:19.544576  333016 ssh_runner.go:195] Run: crio --version
	I0916 11:41:19.578771  333016 ssh_runner.go:195] Run: crio --version
	I0916 11:41:19.643731  333016 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0916 11:41:19.173725  326192 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-838467"
	I0916 11:41:19.173990  326192 host.go:66] Checking if "custom-flannel-838467" exists ...
	I0916 11:41:19.174551  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.175324  326192 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:41:19.175346  326192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:41:19.175405  326192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-838467
	I0916 11:41:19.197142  326192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/custom-flannel-838467/id_rsa Username:docker}
	I0916 11:41:19.198430  326192 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:41:19.198462  326192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:41:19.198538  326192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-838467
	I0916 11:41:19.224134  326192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/custom-flannel-838467/id_rsa Username:docker}
	I0916 11:41:19.335865  326192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:41:19.421603  326192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:41:19.422382  326192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:41:19.497244  326192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:41:19.839268  326192 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0916 11:41:20.148001  326192 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-838467" to be "Ready" ...
	I0916 11:41:20.158855  326192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
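Whether both addons actually came up can be probed from the host once the kubeconfig update above lands; a minimal check, assuming minikube's usual object names (a storage-provisioner pod in kube-system and a "standard" StorageClass):

	# The provisioner pod and the default StorageClass should both exist
	kubectl --context custom-flannel-838467 -n kube-system get pod storage-provisioner
	kubectl --context custom-flannel-838467 get storageclass standard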
	I0916 11:41:19.645160  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:41:19.661707  333016 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:41:19.665380  333016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:41:19.676415  333016 kubeadm.go:883] updating cluster {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:41:19.676535  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:19.676579  333016 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:41:19.742047  333016 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:41:19.742105  333016 ssh_runner.go:195] Run: which lz4
	I0916 11:41:19.745784  333016 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:41:19.749024  333016 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:41:19.749053  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 11:41:20.726623  333016 crio.go:462] duration metric: took 980.877496ms to copy over tarball
	I0916 11:41:20.726707  333016 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:41:23.267869  333016 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.541121164s)
	I0916 11:41:23.267903  333016 crio.go:469] duration metric: took 2.54124645s to extract the tarball
	I0916 11:41:23.267913  333016 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 11:41:23.340628  333016 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:41:23.374342  333016 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:41:23.374368  333016 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:41:23.374427  333016 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.374457  333016 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:41:23.374497  333016 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.374502  333016 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.374514  333016 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.374530  333016 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.374495  333016 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.374427  333016 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:23.375894  333016 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.375896  333016 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.376044  333016 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.375896  333016 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.375906  333016 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.375906  333016 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:41:23.375914  333016 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.375914  333016 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:23.630361  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 11:41:23.660531  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.669314  333016 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:41:23.669405  333016 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:41:23.669458  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.677017  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.679340  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.682602  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.687346  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.706552  333016 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:41:23.706598  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.706602  333016 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.706706  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.733323  333016 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:41:23.733409  333016 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:41:23.733451  333016 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.733496  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.733421  333016 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.733568  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.738018  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.796536  333016 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:41:23.796583  333016 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.796639  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.807990  333016 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:41:23.808034  333016 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.808046  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.808076  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.809979  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.810071  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.810119  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.909741  333016 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:41:23.909838  333016 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.909861  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.909887  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.912887  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.912936  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.920082  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.920254  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.920369  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:24.097891  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.097902  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:24.110265  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:24.110310  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:24.110381  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:24.110394  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:41:24.112528  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:20.160096  326192 addons.go:510] duration metric: took 1.010872416s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:41:20.344573  326192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-838467" context rescaled to 1 replicas
	I0916 11:41:22.152238  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:24.231779  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.231878  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:24.299701  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:41:24.299787  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:41:24.299816  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:24.299863  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:41:24.330660  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:41:24.333761  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.338478  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:41:24.405783  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:41:24.516769  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:24.655351  333016 cache_images.go:92] duration metric: took 1.280968033s to LoadCachedImages
	W0916 11:41:24.655436  333016 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
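When LoadCachedImages fails like this, minikube falls back to pulling the images during kubeadm init; what the runtime actually holds at this point can be listed directly on the node:

	# Show the images CRI-O knows about after the failed cache load
	sudo crictl images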
	I0916 11:41:24.655451  333016 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 crio true true} ...
	I0916 11:41:24.655554  333016 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-406673 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:41:24.655630  333016 ssh_runner.go:195] Run: crio config
	I0916 11:41:24.698372  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:24.698394  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:24.698405  333016 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:41:24.698433  333016 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-406673 NodeName:old-k8s-version-406673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:41:24.698606  333016 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-406673"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:41:24.698743  333016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:41:24.708344  333016 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:41:24.708407  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:41:24.717550  333016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (481 bytes)
	I0916 11:41:24.734803  333016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:41:24.752339  333016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
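With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked against the pinned binaries without mutating the node; a sketch, assuming kubeadm is among the v1.20.0 binaries found above:

	# --dry-run parses the config and prints what init would do, changing nothing
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run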
	I0916 11:41:24.769057  333016 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:41:24.772442  333016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:41:24.782978  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:24.858827  333016 ssh_runner.go:195] Run: sudo systemctl start kubelet
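The empty ExecStart= in the 10-kubeadm.conf drop-in shown earlier is the standard systemd idiom for replacing, rather than appending to, the base unit's ExecStart; once kubelet is up, the merged result can be inspected:

	# Print the base unit plus the drop-in exactly as systemd merges them
	systemctl cat kubelet
	# The active command line should match the overridden ExecStart
	systemctl show kubelet --property=ExecStart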
	I0916 11:41:24.871739  333016 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673 for IP: 192.168.103.2
	I0916 11:41:24.871765  333016 certs.go:194] generating shared ca certs ...
	I0916 11:41:24.871782  333016 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:24.871958  333016 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:41:24.872020  333016 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:41:24.872037  333016 certs.go:256] generating profile certs ...
	I0916 11:41:24.872110  333016 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key
	I0916 11:41:24.872131  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt with IP's: []
	I0916 11:41:25.048291  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt ...
	I0916 11:41:25.048318  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: {Name:mk4abba6a67f25ef9c59bbcacc5c5dee31e9387f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.048539  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key ...
	I0916 11:41:25.048558  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key: {Name:mk1c39c492dfee9b396f585a47b8783f07fe5103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.048670  333016 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db
	I0916 11:41:25.048688  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:41:25.381754  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db ...
	I0916 11:41:25.381783  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db: {Name:mkba7ece117fcceb2e5dcd2de345d183af279101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.381974  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db ...
	I0916 11:41:25.381991  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db: {Name:mk163caf0f8c6bde6835ea80dd77b20aeeee31cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.382087  333016 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt
	I0916 11:41:25.382180  333016 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key
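The SANs requested at generation time (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2) can be read back out of the finished certificate on the host:

	# Print the Subject Alternative Names baked into the apiserver cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'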
	I0916 11:41:25.382257  333016 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key
	I0916 11:41:25.382279  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt with IP's: []
	I0916 11:41:25.486866  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt ...
	I0916 11:41:25.486894  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt: {Name:mkcd5e73a62407403f2b7382a6bee9d25e01d246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.487102  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key ...
	I0916 11:41:25.487119  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key: {Name:mk02438bf6f24dc9f1622119085bb7f5eb856e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.487333  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:41:25.487376  333016 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:41:25.487393  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:41:25.487423  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:41:25.487451  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:41:25.487489  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:41:25.487545  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
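
Note on the certs phase above: certs.go generates each profile certificate (client, apiserver, aggregator proxy-client) as a fresh key pair signed by the local minikubeCA, with the SANs shown in the "with IP's:" lines. A minimal sketch of that pattern with Go's crypto/x509 follows; the function name, output file names, and validity period are illustrative, not minikube's actual code.

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // signedProfileCert creates a key pair and a certificate signed by the given
    // CA, valid for the given IPs -- the same shape as the apiserver cert above
    // (IPs 10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2 in the log).
    func signedProfileCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: cn},
    		IPAddresses:  ips,
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return err
    	}
    	crt := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	if err := os.WriteFile("apiserver.crt", crt, 0644); err != nil {
    		return err
    	}
    	return os.WriteFile("apiserver.key", keyPEM, 0600)
    }
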
	I0916 11:41:25.488261  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:41:25.513968  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:41:25.538557  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:41:25.562712  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:41:25.585718  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:41:25.611011  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:41:25.636044  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:41:25.670989  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:41:25.696346  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:41:25.726347  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:41:25.751075  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:41:25.774722  333016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:41:25.792779  333016 ssh_runner.go:195] Run: openssl version
	I0916 11:41:25.800733  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:41:25.814085  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.818059  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.818119  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.825641  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:41:25.839273  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:41:25.851228  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.855171  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.855271  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.862163  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:41:25.871484  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:41:25.880429  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.883742  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.883801  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.890371  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
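
Note on the openssl/ln sequence above: OpenSSL locates trusted CAs in /etc/ssl/certs by subject hash, so each PEM is hashed with `openssl x509 -hash -noout` and symlinked as <hash>.0 (b5213941.0 for minikubeCA in the log). A sketch of the same two steps, shelling out the way minikube's ssh_runner does; the helper name is illustrative.

    package sketch

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // trustCert links pemPath into /etc/ssl/certs under its OpenSSL subject-hash
    // name, mirroring the `ln -fs ... /etc/ssl/certs/<hash>.0` commands above.
    func trustCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(pemPath, link)
    }
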
	I0916 11:41:25.901843  333016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:41:25.906238  333016 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:41:25.906290  333016 kubeadm.go:392] StartCluster: {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:41:25.906380  333016 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:41:25.906433  333016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:41:25.947314  333016 cri.go:89] found id: ""
	I0916 11:41:25.947371  333016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:41:25.956327  333016 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:41:25.965412  333016 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:41:25.965494  333016 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:41:25.974409  333016 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:41:25.974427  333016 kubeadm.go:157] found existing configuration files:
	
	I0916 11:41:25.974464  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:41:25.983428  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:41:25.983491  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:41:25.991673  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:41:26.002161  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:41:26.002229  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:41:26.013896  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:41:26.023373  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:41:26.023434  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:41:26.033671  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:41:26.044330  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:41:26.044397  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
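
Note on the grep/rm pairs above: this is minikube's stale-config cleanup. Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is deleted so kubeadm regenerates it (here all four files are missing, hence the status-2 exits). Sketched below, with run standing in for the SSH command runner:

    package sketch

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a shell command; a stand-in for minikube's ssh_runner (assumption).
    func run(cmd string) error {
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // removeStaleKubeconfigs deletes any /etc/kubernetes/*.conf that does not
    // reference the expected control-plane endpoint, as in the log above.
    func removeStaleKubeconfigs() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint is absent or the file is
    		// missing; in both cases the config cannot be reused, so remove it.
    		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
    			_ = run("sudo rm -f " + path)
    		}
    	}
    }
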
	I0916 11:41:26.052990  333016 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:41:26.116552  333016 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 11:41:26.116953  333016 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:41:26.159382  333016 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:41:26.159511  333016 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:41:26.159572  333016 kubeadm.go:310] OS: Linux
	I0916 11:41:26.159642  333016 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:41:26.159724  333016 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:41:26.159793  333016 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:41:26.159860  333016 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:41:26.159924  333016 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:41:26.159993  333016 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:41:26.160055  333016 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:41:26.160116  333016 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:41:26.255274  333016 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:41:26.255371  333016 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:41:26.255493  333016 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 11:41:26.457194  333016 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:41:26.460187  333016 out.go:235]   - Generating certificates and keys ...
	I0916 11:41:26.460307  333016 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:41:26.460412  333016 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:41:26.745903  333016 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:41:27.101695  333016 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:41:27.277283  333016 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:41:27.532738  333016 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:41:27.685826  333016 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:41:27.686041  333016 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-406673] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:41:27.949848  333016 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:41:27.950175  333016 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-406673] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:41:28.302029  333016 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:41:28.615418  333016 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:41:28.692846  333016 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:41:28.692963  333016 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:41:28.844556  333016 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:41:28.948784  333016 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:41:29.064396  333016 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:41:24.651896  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:27.152349  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:27.651470  326192 node_ready.go:49] node "custom-flannel-838467" has status "Ready":"True"
	I0916 11:41:27.651491  326192 node_ready.go:38] duration metric: took 7.503462411s for node "custom-flannel-838467" to be "Ready" ...
	I0916 11:41:27.651501  326192 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:41:27.659052  326192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:29.445363  333016 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:41:29.457728  333016 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:41:29.458698  333016 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:41:29.458771  333016 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:41:29.544165  333016 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:41:29.546617  333016 out.go:235]   - Booting up control plane ...
	I0916 11:41:29.546749  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:41:29.552789  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:41:29.553876  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:41:29.554528  333016 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:41:29.556653  333016 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 11:41:29.665548  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:32.165305  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:34.665436  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:36.665933  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:42.059188  333016 kubeadm.go:310] [apiclient] All control plane components are healthy after 12.502447 seconds
	I0916 11:41:42.059386  333016 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:41:42.071733  333016 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:41:42.590849  333016 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:41:42.591044  333016 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-406673 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0916 11:41:43.098669  333016 kubeadm.go:310] [bootstrap-token] Using token: 24uzd8.f12jm4gfeszy41x7
	I0916 11:41:43.100371  333016 out.go:235]   - Configuring RBAC rules ...
	I0916 11:41:43.100541  333016 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:41:43.104683  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:41:43.111318  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:41:43.113371  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:41:43.115697  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:41:43.118292  333016 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:41:43.126934  333016 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:41:43.360284  333016 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:41:43.516475  333016 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:41:43.517781  333016 kubeadm.go:310] 
	I0916 11:41:43.517878  333016 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:41:43.517889  333016 kubeadm.go:310] 
	I0916 11:41:43.518023  333016 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:41:43.518044  333016 kubeadm.go:310] 
	I0916 11:41:43.518068  333016 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:41:43.518140  333016 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:41:43.518207  333016 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:41:43.518214  333016 kubeadm.go:310] 
	I0916 11:41:43.518276  333016 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:41:43.518282  333016 kubeadm.go:310] 
	I0916 11:41:43.518322  333016 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:41:43.518349  333016 kubeadm.go:310] 
	I0916 11:41:43.518438  333016 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:41:43.518542  333016 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:41:43.518635  333016 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:41:43.518650  333016 kubeadm.go:310] 
	I0916 11:41:43.518802  333016 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:41:43.518905  333016 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:41:43.518915  333016 kubeadm.go:310] 
	I0916 11:41:43.519009  333016 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 24uzd8.f12jm4gfeszy41x7 \
	I0916 11:41:43.519175  333016 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:41:43.519216  333016 kubeadm.go:310]     --control-plane 
	I0916 11:41:43.519226  333016 kubeadm.go:310] 
	I0916 11:41:43.519328  333016 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:41:43.519343  333016 kubeadm.go:310] 
	I0916 11:41:43.519454  333016 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 24uzd8.f12jm4gfeszy41x7 \
	I0916 11:41:43.519608  333016 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:41:43.521710  333016 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:41:43.521904  333016 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:41:43.521936  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:43.521946  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:43.523972  333016 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:41:43.525520  333016 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:41:43.529863  333016 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0916 11:41:43.529889  333016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:41:43.551346  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
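
Note on the CNI step above: with the docker driver and the crio runtime minikube picks kindnet, writes the rendered manifest to /var/tmp/minikube/cni.yaml, and applies it with the version-pinned kubectl. A condensed sketch of those two steps (the function name is illustrative; the paths are the ones from the log):

    package sketch

    import (
    	"os"
    	"os/exec"
    )

    // applyCNI writes an in-memory CNI manifest to disk and applies it with the
    // pinned kubectl, as in the cni.go lines above.
    func applyCNI(manifest []byte) error {
    	const path = "/var/tmp/minikube/cni.yaml"
    	if err := os.WriteFile(path, manifest, 0644); err != nil {
    		return err
    	}
    	return exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).Run()
    }
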
	I0916 11:41:43.999610  333016 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:41:43.999688  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:43.999735  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-406673 minikube.k8s.io/updated_at=2024_09_16T11_41_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=old-k8s-version-406673 minikube.k8s.io/primary=true
	I0916 11:41:44.008244  333016 ops.go:34] apiserver oom_adj: -16
	I0916 11:41:44.110534  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:39.164837  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:41.165886  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:43.167455  326192 pod_ready.go:93] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.167492  326192 pod_ready.go:82] duration metric: took 15.508409943s for pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.167506  326192 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.173572  326192 pod_ready.go:93] pod "etcd-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.173597  326192 pod_ready.go:82] duration metric: took 6.084061ms for pod "etcd-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.173608  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.179725  326192 pod_ready.go:93] pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.179750  326192 pod_ready.go:82] duration metric: took 6.135589ms for pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.179759  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.185203  326192 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.185229  326192 pod_ready.go:82] duration metric: took 5.46328ms for pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.185240  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-4w8bp" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.190735  326192 pod_ready.go:93] pod "kube-proxy-4w8bp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.190759  326192 pod_ready.go:82] duration metric: took 5.51193ms for pod "kube-proxy-4w8bp" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.190771  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.563503  326192 pod_ready.go:93] pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.563527  326192 pod_ready.go:82] duration metric: took 372.750298ms for pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.563545  326192 pod_ready.go:39] duration metric: took 15.912032814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
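
Note on the pod_ready.go lines above: each system-critical pod is polled until its PodReady condition reports True (the `has status "Ready":"True"` lines). A minimal client-go sketch of that check; the function name and polling cadence are illustrative, and a standard kubeconfig path is assumed.

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's Ready condition is True or the timeout
    // expires -- the same condition pod_ready.go reports in the log.
    func waitPodReady(kubeconfig, ns, name string, timeout time.Duration) bool {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return false
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return false
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return true
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }
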
	I0916 11:41:43.563563  326192 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:41:43.563624  326192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:41:43.576500  326192 api_server.go:72] duration metric: took 24.427395386s to wait for apiserver process to appear ...
	I0916 11:41:43.576526  326192 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:41:43.576546  326192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:41:43.580307  326192 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:41:43.581394  326192 api_server.go:141] control plane version: v1.31.1
	I0916 11:41:43.581418  326192 api_server.go:131] duration metric: took 4.885665ms to wait for apiserver health ...
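
Note on the healthz wait above: apiserver readiness is probed with plain HTTPS GETs against /healthz until it answers 200 with body "ok". A bare-bones equivalent; TLS verification is skipped here only to keep the sketch short, whereas the real check trusts the cluster CA.

    package sketch

    import (
    	"crypto/tls"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy reports whether GET <url> returns 200 with body "ok",
    // matching the `returned 200: ok` lines in the log.
    func apiserverHealthy(url string) bool {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for brevity: InsecureSkipVerify instead of the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url) // e.g. https://192.168.85.2:8443/healthz
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }
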
	I0916 11:41:43.581425  326192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:41:43.766131  326192 system_pods.go:59] 7 kube-system pods found
	I0916 11:41:43.766162  326192 system_pods.go:61] "coredns-7c65d6cfc9-v8wnh" [70e55c30-2327-486e-a2f2-45ca826531d5] Running
	I0916 11:41:43.766167  326192 system_pods.go:61] "etcd-custom-flannel-838467" [c47fb50c-7a36-43f2-8b62-a341436839c9] Running
	I0916 11:41:43.766170  326192 system_pods.go:61] "kube-apiserver-custom-flannel-838467" [36053552-7860-4bd5-9898-ffb7ab082a55] Running
	I0916 11:41:43.766174  326192 system_pods.go:61] "kube-controller-manager-custom-flannel-838467" [1b575692-31f1-4a70-be42-76c9439fa88d] Running
	I0916 11:41:43.766178  326192 system_pods.go:61] "kube-proxy-4w8bp" [0aa1010b-96bf-491d-b9ca-f9fb9b9cfbf8] Running
	I0916 11:41:43.766181  326192 system_pods.go:61] "kube-scheduler-custom-flannel-838467" [dc64976a-912d-4ba4-869a-a96a59c28ecd] Running
	I0916 11:41:43.766183  326192 system_pods.go:61] "storage-provisioner" [506055cc-e639-4857-adbc-0c254600538f] Running
	I0916 11:41:43.766191  326192 system_pods.go:74] duration metric: took 184.758722ms to wait for pod list to return data ...
	I0916 11:41:43.766197  326192 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:41:43.964353  326192 default_sa.go:45] found service account: "default"
	I0916 11:41:43.964386  326192 default_sa.go:55] duration metric: took 198.182376ms for default service account to be created ...
	I0916 11:41:43.964400  326192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:41:44.167530  326192 system_pods.go:86] 7 kube-system pods found
	I0916 11:41:44.167574  326192 system_pods.go:89] "coredns-7c65d6cfc9-v8wnh" [70e55c30-2327-486e-a2f2-45ca826531d5] Running
	I0916 11:41:44.167584  326192 system_pods.go:89] "etcd-custom-flannel-838467" [c47fb50c-7a36-43f2-8b62-a341436839c9] Running
	I0916 11:41:44.167591  326192 system_pods.go:89] "kube-apiserver-custom-flannel-838467" [36053552-7860-4bd5-9898-ffb7ab082a55] Running
	I0916 11:41:44.167597  326192 system_pods.go:89] "kube-controller-manager-custom-flannel-838467" [1b575692-31f1-4a70-be42-76c9439fa88d] Running
	I0916 11:41:44.167602  326192 system_pods.go:89] "kube-proxy-4w8bp" [0aa1010b-96bf-491d-b9ca-f9fb9b9cfbf8] Running
	I0916 11:41:44.167608  326192 system_pods.go:89] "kube-scheduler-custom-flannel-838467" [dc64976a-912d-4ba4-869a-a96a59c28ecd] Running
	I0916 11:41:44.167612  326192 system_pods.go:89] "storage-provisioner" [506055cc-e639-4857-adbc-0c254600538f] Running
	I0916 11:41:44.167621  326192 system_pods.go:126] duration metric: took 203.213461ms to wait for k8s-apps to be running ...
	I0916 11:41:44.167631  326192 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:41:44.167685  326192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:41:44.180782  326192 system_svc.go:56] duration metric: took 13.141604ms WaitForService to wait for kubelet
	I0916 11:41:44.180814  326192 kubeadm.go:582] duration metric: took 25.031715543s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:41:44.180838  326192 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:41:44.364740  326192 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:41:44.364769  326192 node_conditions.go:123] node cpu capacity is 8
	I0916 11:41:44.364779  326192 node_conditions.go:105] duration metric: took 183.936169ms to run NodePressure ...
	I0916 11:41:44.364790  326192 start.go:241] waiting for startup goroutines ...
	I0916 11:41:44.364796  326192 start.go:246] waiting for cluster config update ...
	I0916 11:41:44.364805  326192 start.go:255] writing updated cluster config ...
	I0916 11:41:44.365079  326192 ssh_runner.go:195] Run: rm -f paused
	I0916 11:41:44.371879  326192 out.go:177] * Done! kubectl is now configured to use "custom-flannel-838467" cluster and "default" namespace by default
	E0916 11:41:44.373468  326192 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	I0916 11:41:44.611272  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:45.110742  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:45.610915  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:46.110672  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:46.611285  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:47.111092  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:47.610788  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:48.111373  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:48.611189  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:49.110790  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:49.611662  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:50.111045  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:50.611562  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:51.111442  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:51.611212  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:52.111501  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:52.611443  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:53.111633  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:53.611581  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:54.111313  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:54.611583  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:55.111268  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:55.610651  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:56.110600  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:56.610770  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:57.111250  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:57.610984  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:58.111247  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:58.611501  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:59.111271  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:59.611607  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.110881  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.611603  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.717585  333016 kubeadm.go:1113] duration metric: took 16.717955139s to wait for elevateKubeSystemPrivileges
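
Note on the run of identical `kubectl get sa default` lines above: kubeadm creates the default service account asynchronously, so minikube polls for it roughly every half second before creating the minikube-rbac clusterrolebinding; the loop here accounts for the 16.7s elevateKubeSystemPrivileges duration. The pattern, sketched with an illustrative helper:

    package sketch

    import (
    	"os/exec"
    	"time"
    )

    // waitDefaultServiceAccount retries `kubectl get sa default` until it exits
    // zero, mirroring the ~500ms polling loop in the log above.
    func waitDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if cmd.Run() == nil {
    			return true // the service account exists; RBAC setup can proceed
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }
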
	I0916 11:42:00.717628  333016 kubeadm.go:394] duration metric: took 34.811339511s to StartCluster
	I0916 11:42:00.717650  333016 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:42:00.717734  333016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:42:00.719920  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:42:00.720139  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:42:00.720142  333016 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:42:00.720381  333016 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:42:00.720426  333016 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:42:00.720490  333016 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-406673"
	I0916 11:42:00.720512  333016 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-406673"
	I0916 11:42:00.720537  333016 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:42:00.720922  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.720974  333016 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-406673"
	I0916 11:42:00.721002  333016 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-406673"
	I0916 11:42:00.721279  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.722177  333016 out.go:177] * Verifying Kubernetes components...
	I0916 11:42:00.723934  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:42:00.752502  333016 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-406673"
	I0916 11:42:00.752539  333016 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:42:00.755899  333016 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:42:00.756270  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.757582  333016 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:42:00.757605  333016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:42:00.757662  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:42:00.776137  333016 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:42:00.776158  333016 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:42:00.776215  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:42:00.777250  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:42:00.793326  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:42:01.011292  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:42:01.019742  333016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:42:01.096506  333016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:42:01.120265  333016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:42:01.516905  333016 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
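
Note on the host-record injection above: the sed pipeline at 11:42:01 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. After the replace, the Corefile carries a hosts block along these lines (reconstructed from the sed expression in the log):

        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
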
	I0916 11:42:01.535935  333016 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:42:01.796472  333016 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:42:01.798178  333016 addons.go:510] duration metric: took 1.077738203s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:42:02.021938  333016 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-406673" context rescaled to 1 replicas
	I0916 11:42:03.540269  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:06.039405  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:08.039450  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:10.578149  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:13.039705  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:15.040491  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:17.539137  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:19.539764  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:22.039970  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:24.539528  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:27.039570  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:29.038931  333016 node_ready.go:49] node "old-k8s-version-406673" has status "Ready":"True"
	I0916 11:42:29.038954  333016 node_ready.go:38] duration metric: took 27.502986487s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:42:29.038963  333016 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:42:29.045578  333016 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:42:31.049070  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:42:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 11:42:33.049733  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:42:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 11:42:35.051703  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:37.552157  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:40.051048  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:40.551252  333016 pod_ready.go:93] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"True"
	I0916 11:42:40.551275  333016 pod_ready.go:82] duration metric: took 11.505673624s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:42:40.551286  333016 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:42:42.558047  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:45.057493  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:47.057603  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:49.556869  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:51.557684  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:54.056762  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:56.058223  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:58.557744  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:01.057276  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:03.058237  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:05.557660  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:08.057228  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:10.057485  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:12.556652  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:14.557496  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:17.057859  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:19.058214  333016 pod_ready.go:93] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.058243  333016 pod_ready.go:82] duration metric: took 38.506948862s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.058265  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.063031  333016 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.063055  333016 pod_ready.go:82] duration metric: took 4.781482ms for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.063071  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.069862  333016 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.069881  333016 pod_ready.go:82] duration metric: took 6.802265ms for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.069890  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.074303  333016 pod_ready.go:93] pod "kube-proxy-pcbvp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.074328  333016 pod_ready.go:82] duration metric: took 4.43151ms for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.074338  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.078134  333016 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.078154  333016 pod_ready.go:82] duration metric: took 3.809778ms for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.078164  333016 pod_ready.go:39] duration metric: took 50.039189729s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:43:19.078180  333016 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:43:19.078230  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:19.078279  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:19.114156  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:19.114176  333016 cri.go:89] found id: ""
	I0916 11:43:19.114183  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:19.114235  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.117974  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:19.118035  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:19.152156  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:19.152181  333016 cri.go:89] found id: ""
	I0916 11:43:19.152192  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:19.152246  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.155805  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:19.155863  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:19.190036  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:19.190057  333016 cri.go:89] found id: ""
	I0916 11:43:19.190064  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:19.190111  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.193389  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:19.193445  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:19.227236  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:19.227263  333016 cri.go:89] found id: ""
	I0916 11:43:19.227270  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:19.227325  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.230784  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:19.230843  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:19.264360  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:19.264380  333016 cri.go:89] found id: ""
	I0916 11:43:19.264388  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:19.264437  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.267844  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:19.267916  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:19.300894  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:19.300916  333016 cri.go:89] found id: ""
	I0916 11:43:19.300925  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:19.300982  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.304410  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:19.304463  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:19.338532  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:19.338561  333016 cri.go:89] found id: ""
	I0916 11:43:19.338570  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:19.338617  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.342059  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:19.342087  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:19.375568  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:19.375598  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:19.412566  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:19.412600  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:19.447709  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:19.447738  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:19.485244  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:19.485272  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:19.583549  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:19.583577  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:19.619156  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:19.619188  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:19.664569  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:19.664605  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:19.698129  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:19.698158  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:19.747705  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:19.747738  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:19.798683  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:19.798720  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:19.862046  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:19.862082  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:43:22.384464  333016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:43:22.396937  333016 api_server.go:72] duration metric: took 1m21.676729889s to wait for apiserver process to appear ...
	I0916 11:43:22.396965  333016 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:43:22.397008  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:22.397062  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:22.430612  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:22.430638  333016 cri.go:89] found id: ""
	I0916 11:43:22.430646  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:22.430694  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.434324  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:22.434382  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:22.469323  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:22.469375  333016 cri.go:89] found id: ""
	I0916 11:43:22.469385  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:22.469455  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.473369  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:22.473438  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:22.507487  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:22.507514  333016 cri.go:89] found id: ""
	I0916 11:43:22.507524  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:22.507610  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.511481  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:22.511553  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:22.546774  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:22.546797  333016 cri.go:89] found id: ""
	I0916 11:43:22.546806  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:22.546854  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.550741  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:22.550815  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:22.584441  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:22.584466  333016 cri.go:89] found id: ""
	I0916 11:43:22.584478  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:22.584518  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.587995  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:22.588052  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:22.621210  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:22.621232  333016 cri.go:89] found id: ""
	I0916 11:43:22.621238  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:22.621288  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.624788  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:22.624860  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:22.659577  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:22.659601  333016 cri.go:89] found id: ""
	I0916 11:43:22.659622  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:22.659672  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.663356  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:22.663381  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:22.759410  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:22.759439  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:22.794834  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:22.794863  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:22.834275  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:22.834316  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:22.868286  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:22.868315  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:22.917081  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:22.917114  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:22.967952  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:22.967987  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:23.027899  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:23.027937  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:43:23.048542  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:23.048576  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:23.086646  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:23.086676  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:23.122143  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:23.122173  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:23.169305  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:23.169352  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:25.703925  333016 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:43:25.710132  333016 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:43:25.711030  333016 api_server.go:141] control plane version: v1.20.0
	I0916 11:43:25.711051  333016 api_server.go:131] duration metric: took 3.314079399s to wait for apiserver health ...
	I0916 11:43:25.711059  333016 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:43:25.711077  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:25.711124  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:25.744083  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:25.744104  333016 cri.go:89] found id: ""
	I0916 11:43:25.744114  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:25.744169  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.747732  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:25.747806  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:25.780830  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:25.780855  333016 cri.go:89] found id: ""
	I0916 11:43:25.780864  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:25.780905  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.784503  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:25.784565  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:25.819038  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:25.819061  333016 cri.go:89] found id: ""
	I0916 11:43:25.819068  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:25.819116  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.822868  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:25.822952  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:25.857513  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:25.857536  333016 cri.go:89] found id: ""
	I0916 11:43:25.857545  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:25.857604  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.861133  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:25.861199  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:25.895136  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:25.895165  333016 cri.go:89] found id: ""
	I0916 11:43:25.895175  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:25.895233  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.898774  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:25.898849  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:25.932895  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:25.932918  333016 cri.go:89] found id: ""
	I0916 11:43:25.932927  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:25.932981  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.936427  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:25.936488  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:25.972284  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:25.972305  333016 cri.go:89] found id: ""
	I0916 11:43:25.972312  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:25.972351  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.975973  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:25.976004  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:43:25.996792  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:25.996823  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:26.043167  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:26.043205  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:26.079042  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:26.079070  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:26.116242  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:26.116270  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:26.152271  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:26.152296  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:26.202878  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:26.202913  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:26.264457  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:26.264495  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:26.363604  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:26.363636  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:26.398030  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:26.398055  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:26.431498  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:26.431531  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:26.479671  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:26.479703  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:29.023411  333016 system_pods.go:59] 8 kube-system pods found
	I0916 11:43:29.023440  333016 system_pods.go:61] "coredns-74ff55c5b-6xlgw" [684992a2-7081-4df3-a73e-a21569a28ce6] Running
	I0916 11:43:29.023445  333016 system_pods.go:61] "etcd-old-k8s-version-406673" [d8c0d4cd-1c4a-4881-9f18-d54a4433f8ab] Running
	I0916 11:43:29.023448  333016 system_pods.go:61] "kindnet-mjcgf" [5888dd63-6767-4920-ac13-becf70cd6481] Running
	I0916 11:43:29.023452  333016 system_pods.go:61] "kube-apiserver-old-k8s-version-406673" [00ed1d06-176e-453e-a0bf-29244d78687c] Running
	I0916 11:43:29.023455  333016 system_pods.go:61] "kube-controller-manager-old-k8s-version-406673" [5b6c1595-560a-41d9-b653-9bf2a5c85f67] Running
	I0916 11:43:29.023459  333016 system_pods.go:61] "kube-proxy-pcbvp" [d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1] Running
	I0916 11:43:29.023462  333016 system_pods.go:61] "kube-scheduler-old-k8s-version-406673" [d6f812b4-bf33-454d-8375-fe804f003016] Running
	I0916 11:43:29.023465  333016 system_pods.go:61] "storage-provisioner" [28d14db2-66e4-43f6-8288-4ddc0f3a994c] Running
	I0916 11:43:29.023471  333016 system_pods.go:74] duration metric: took 3.312405641s to wait for pod list to return data ...
	I0916 11:43:29.023478  333016 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:43:29.025649  333016 default_sa.go:45] found service account: "default"
	I0916 11:43:29.025676  333016 default_sa.go:55] duration metric: took 2.190408ms for default service account to be created ...
	I0916 11:43:29.025686  333016 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:43:29.033323  333016 system_pods.go:86] 8 kube-system pods found
	I0916 11:43:29.033381  333016 system_pods.go:89] "coredns-74ff55c5b-6xlgw" [684992a2-7081-4df3-a73e-a21569a28ce6] Running
	I0916 11:43:29.033390  333016 system_pods.go:89] "etcd-old-k8s-version-406673" [d8c0d4cd-1c4a-4881-9f18-d54a4433f8ab] Running
	I0916 11:43:29.033396  333016 system_pods.go:89] "kindnet-mjcgf" [5888dd63-6767-4920-ac13-becf70cd6481] Running
	I0916 11:43:29.033405  333016 system_pods.go:89] "kube-apiserver-old-k8s-version-406673" [00ed1d06-176e-453e-a0bf-29244d78687c] Running
	I0916 11:43:29.033411  333016 system_pods.go:89] "kube-controller-manager-old-k8s-version-406673" [5b6c1595-560a-41d9-b653-9bf2a5c85f67] Running
	I0916 11:43:29.033418  333016 system_pods.go:89] "kube-proxy-pcbvp" [d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1] Running
	I0916 11:43:29.033423  333016 system_pods.go:89] "kube-scheduler-old-k8s-version-406673" [d6f812b4-bf33-454d-8375-fe804f003016] Running
	I0916 11:43:29.033431  333016 system_pods.go:89] "storage-provisioner" [28d14db2-66e4-43f6-8288-4ddc0f3a994c] Running
	I0916 11:43:29.033444  333016 system_pods.go:126] duration metric: took 7.751194ms to wait for k8s-apps to be running ...
	I0916 11:43:29.033457  333016 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:43:29.033512  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:43:29.045813  333016 system_svc.go:56] duration metric: took 12.349678ms WaitForService to wait for kubelet
	I0916 11:43:29.045837  333016 kubeadm.go:582] duration metric: took 1m28.325673057s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:43:29.045852  333016 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:43:29.048437  333016 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:43:29.048464  333016 node_conditions.go:123] node cpu capacity is 8
	I0916 11:43:29.048478  333016 node_conditions.go:105] duration metric: took 2.620808ms to run NodePressure ...
	I0916 11:43:29.048492  333016 start.go:241] waiting for startup goroutines ...
	I0916 11:43:29.048501  333016 start.go:246] waiting for cluster config update ...
	I0916 11:43:29.048515  333016 start.go:255] writing updated cluster config ...
	I0916 11:43:29.048782  333016 ssh_runner.go:195] Run: rm -f paused
	I0916 11:43:29.055620  333016 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-406673" cluster and "default" namespace by default
	E0916 11:43:29.057070  333016 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.856770052Z" level=info msg="Checking pod kube-system_coredns-74ff55c5b-6xlgw for CNI network kindnet (type=ptp)"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.859357089Z" level=info msg="Ran pod sandbox eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32 with infra container: kube-system/storage-provisioner/POD" id=c902770d-194b-4540-8ac4-7301f0545b96 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.859526385Z" level=info msg="Ran pod sandbox 15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33 with infra container: kube-system/coredns-74ff55c5b-6xlgw/POD" id=f576a970-6d7c-4b43-af9e-da0ea0eb3ad3 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860222892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c4d6a36e-e674-485c-a8db-f3ac539a2447 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860278624Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=10f143f9-9fa6-4a76-a15b-32952af72ee1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860399431Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c4d6a36e-e674-485c-a8db-f3ac539a2447 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860422997Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=10f143f9-9fa6-4a76-a15b-32952af72ee1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860976080Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6bb0be3-9f1a-4237-81de-68bd60b184b1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861016518Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=306f539a-c560-4371-8f00-331724f83370 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861171586Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=306f539a-c560-4371-8f00-331724f83370 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861259251Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b6bb0be3-9f1a-4237-81de-68bd60b184b1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862001870Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=eb2e30c4-75e5-4521-ab72-cc7869c1fce1 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862040425Z" level=info msg="Creating container: kube-system/coredns-74ff55c5b-6xlgw/coredns" id=1df64f26-2035-4f6b-95f6-226bec645aec name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862080433Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862120024Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.878701585Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1e2bfb353a2952745b9f6b0c04ba55371973020ad1e5e874c5dd82658c63be84/merged/etc/passwd: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.878750411Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1e2bfb353a2952745b9f6b0c04ba55371973020ad1e5e874c5dd82658c63be84/merged/etc/group: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.879154582Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/99e7b464885912b28b588c11a83ff47920ae95ffea4c649719c5189f8ead6e3c/merged/etc/passwd: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.879187538Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/99e7b464885912b28b588c11a83ff47920ae95ffea4c649719c5189f8ead6e3c/merged/etc/group: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.918889133Z" level=info msg="Created container d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0: kube-system/coredns-74ff55c5b-6xlgw/coredns" id=1df64f26-2035-4f6b-95f6-226bec645aec name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.919466702Z" level=info msg="Starting container: d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0" id=38ac1fc0-fac9-4a00-8484-820e0b437755 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.922270175Z" level=info msg="Created container 33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d: kube-system/storage-provisioner/storage-provisioner" id=eb2e30c4-75e5-4521-ab72-cc7869c1fce1 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.922845211Z" level=info msg="Starting container: 33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d" id=7db5cf50-ec68-4b78-aebf-9b05d6d07e42 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.926237079Z" level=info msg="Started container" PID=2929 containerID=d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0 description=kube-system/coredns-74ff55c5b-6xlgw/coredns id=38ac1fc0-fac9-4a00-8484-820e0b437755 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.929776182Z" level=info msg="Started container" PID=2936 containerID=33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d description=kube-system/storage-provisioner/storage-provisioner id=7db5cf50-ec68-4b78-aebf-9b05d6d07e42 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4db88b336bed       bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16                                     57 seconds ago       Running             coredns                   0                   15c3605023254       coredns-74ff55c5b-6xlgw
	33a7974b5f09f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     57 seconds ago       Running             storage-provisioner       0                   eee3fde4da330       storage-provisioner
	342a012c428e0       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b   About a minute ago   Running             kindnet-cni               0                   3d1945d7b04c2       kindnet-mjcgf
	de3eaebd990dc       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc                                     About a minute ago   Running             kube-proxy                0                   8c9b9fc80cd42       kube-proxy-pcbvp
	6f6e59b67f114       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899                                     About a minute ago   Running             kube-scheduler            0                   dbdf46e21272e       kube-scheduler-old-k8s-version-406673
	31259a2842c01       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99                                     About a minute ago   Running             kube-apiserver            0                   2bf825db35d7b       kube-apiserver-old-k8s-version-406673
	1612fad1a4d07       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                     About a minute ago   Running             etcd                      0                   1eaac4c5376fc       etcd-old-k8s-version-406673
	9aff740155270       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080                                     About a minute ago   Running             kube-controller-manager   0                   483dd0ba7fd68       kube-controller-manager-old-k8s-version-406673
	
	
	==> coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:38442 - 48402 "HINFO IN 8440324266966115617.7448481208015864567. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011622953s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-406673
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-406673
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-406673
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_41_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:41:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-406673
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:43:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:42:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-406673
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 318da86b3a3c4fd0827c12705ac51529
	  System UUID:                2d5bda39-09b0-43d0-95f9-1ff418499524
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-6xlgw                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     92s
	  kube-system                 etcd-old-k8s-version-406673                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         103s
	  kube-system                 kindnet-mjcgf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-old-k8s-version-406673             250m (3%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-old-k8s-version-406673    200m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-pcbvp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-old-k8s-version-406673             100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 104s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s  kubelet     Node old-k8s-version-406673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientPID
	  Normal  Starting                 91s   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                64s   kubelet     Node old-k8s-version-406673 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 7b 93 72 59 99 08 06
	[Sep16 11:38] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e c8 59 6d ba 48 08 06
	[Sep16 11:39] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 0e 56 ba 2b 08 08 06
	[  +0.072831] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 e4 c5 5d 5b cd 08 06
	
	
	==> etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] <==
	2024-09-16 11:41:36.508821 I | embed: listening for peers on 192.168.103.2:2380
	2024-09-16 11:41:36.508921 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 is starting a new election at term 1
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 became candidate at term 2
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 became leader at term 2
	raft2024/09/16 11:41:37 INFO: raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2
	2024-09-16 11:41:37.294464 I | etcdserver: published {Name:old-k8s-version-406673 ClientURLs:[https://192.168.103.2:2379]} to cluster 3336683c081d149d
	2024-09-16 11:41:37.294487 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-16 11:41:37.294537 I | embed: ready to serve client requests
	2024-09-16 11:41:37.294728 I | embed: ready to serve client requests
	2024-09-16 11:41:37.295159 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-16 11:41:37.296260 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-16 11:41:37.297103 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-16 11:41:37.298217 I | embed: serving client requests on 192.168.103.2:2379
	2024-09-16 11:41:55.011036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:04.397724 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:14.397752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:24.397850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:34.397672 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:44.397732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:54.397786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:04.397868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:14.397710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:24.397875 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:43:32 up  1:25,  0 users,  load average: 0.94, 1.11, 0.91
	Linux old-k8s-version-406673 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] <==
	I0916 11:42:05.095640       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:42:05.095656       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:42:05.095674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:42:05.394421       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:42:05.394469       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:42:05.394477       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:42:05.695253       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:42:05.695279       1 metrics.go:61] Registering metrics
	I0916 11:42:05.695331       1 controller.go:374] Syncing nftables rules
	I0916 11:42:15.397552       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:15.397613       1 main.go:299] handling current node
	I0916 11:42:25.398751       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:25.398783       1 main.go:299] handling current node
	I0916 11:42:35.395218       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:35.395262       1 main.go:299] handling current node
	I0916 11:42:45.397419       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:45.397464       1 main.go:299] handling current node
	I0916 11:42:55.402217       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:55.402249       1 main.go:299] handling current node
	I0916 11:43:05.394944       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:05.394981       1 main.go:299] handling current node
	I0916 11:43:15.397437       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:15.397487       1 main.go:299] handling current node
	I0916 11:43:25.397439       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:25.397514       1 main.go:299] handling current node
	
	
	==> kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] <==
	I0916 11:41:41.453400       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0916 11:41:41.453431       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 11:41:41.458485       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0916 11:41:41.461410       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:41:41.461427       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0916 11:41:41.806470       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:41:41.841007       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0916 11:41:41.917086       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:41:41.918224       1 controller.go:606] quota admission added evaluator for: endpoints
	I0916 11:41:41.921847       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:41:42.967364       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0916 11:41:43.351236       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0916 11:41:43.504028       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0916 11:41:48.768075       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:42:00.244433       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:42:00.297844       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0916 11:42:11.190173       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:42:11.190214       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:42:11.190222       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:42:42.093321       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:42:42.093393       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:42:42.093403       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:43:20.270631       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:43:20.270672       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:43:20.270679       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] <==
	I0916 11:42:00.295414       1 shared_informer.go:247] Caches are synced for endpoint 
	I0916 11:42:00.295446       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0916 11:42:00.301750       1 range_allocator.go:373] Set node old-k8s-version-406673 PodCIDR to [10.244.0.0/24]
	I0916 11:42:00.302502       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0916 11:42:00.303655       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pcbvp"
	I0916 11:42:00.303679       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mjcgf"
	I0916 11:42:00.307508       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-406673" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0916 11:42:00.312688       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-q8x49"
	I0916 11:42:00.321152       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-6xlgw"
	I0916 11:42:00.393566       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0916 11:42:00.393684       1 shared_informer.go:247] Caches are synced for HPA 
	I0916 11:42:00.393856       1 shared_informer.go:247] Caches are synced for disruption 
	I0916 11:42:00.393875       1 disruption.go:339] Sending events to api server.
	E0916 11:42:00.408825       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"366e9dff-395f-41eb-aaa4-5fe8a77c24b1", ResourceVersion:"267", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862083703, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014c20c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014c20e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014c2100), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2120), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2160), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014c2180)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014c21c0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0010e7ce0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0005f4238), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000430fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00060a638)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0005f4280)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0916 11:42:00.423745       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:42:00.453287       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0916 11:42:00.469890       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:42:00.626879       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0916 11:42:00.895582       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:42:00.895685       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0916 11:42:00.927104       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:42:01.537394       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0916 11:42:01.602983       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-q8x49"
	I0916 11:42:30.295800       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	
	==> kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] <==
	I0916 11:42:00.995500       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:42:00.995590       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:42:01.010731       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:42:01.010826       1 server_others.go:185] Using iptables Proxier.
	I0916 11:42:01.012001       1 server.go:650] Version: v1.20.0
	I0916 11:42:01.013499       1 config.go:315] Starting service config controller
	I0916 11:42:01.013577       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:42:01.013592       1 config.go:224] Starting endpoint slice config controller
	I0916 11:42:01.013614       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:42:01.113797       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:42:01.113806       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] <==
	W0916 11:41:40.476670       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:41:40.476699       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:41:40.476709       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:41:40.476720       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:41:40.516274       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:41:40.516365       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:41:40.516377       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:41:40.516397       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0916 11:41:40.517924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:40.524733       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:41:40.593689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:40.593833       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:41:40.594045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:41:40.594338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:41:40.594501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:40.594699       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:41:40.594858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:41:40.595116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:41:40.595261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:41:40.595399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:41:41.428933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:41.508045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.594591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.695406       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0916 11:41:44.916550       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.318827    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.395303    2069 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.396094    2069 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: E0916 11:42:00.396471    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495219    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/5888dd63-6767-4920-ac13-becf70cd6481-xtables-lock") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495265    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-kube-proxy") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495307    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-h79b7" (UniqueName: "kubernetes.io/secret/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-kube-proxy-token-h79b7") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495404    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-lib-modules") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495507    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/5888dd63-6767-4920-ac13-becf70cd6481-cni-cfg") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495548    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/5888dd63-6767-4920-ac13-becf70cd6481-lib-modules") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495604    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-c5qt9" (UniqueName: "kubernetes.io/secret/5888dd63-6767-4920-ac13-becf70cd6481-kindnet-token-c5qt9") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495632    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-xtables-lock") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: W0916 11:42:00.633660    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-3d1945d7b04c2d25d7a1cc6d0bafc6adce69c9f092118e0e86af68ccc80d1014 WatchSource:0}: Error finding container 3d1945d7b04c2d25d7a1cc6d0bafc6adce69c9f092118e0e86af68ccc80d1014: Status 404 returned error &{%!s(*http.body=&{0xc0009ffd80 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: W0916 11:42:00.640993    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-8c9b9fc80cd428329dc256f5b234864e1037d0a44e37ad7d8aa19e4546d83c7a WatchSource:0}: Error finding container 8c9b9fc80cd428329dc256f5b234864e1037d0a44e37ad7d8aa19e4546d83c7a: Status 404 returned error &{%!s(*http.body=&{0xc000e4daa0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:03 old-k8s-version-406673 kubelet[2069]: E0916 11:42:03.893546    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:08 old-k8s-version-406673 kubelet[2069]: E0916 11:42:08.894227    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:13 old-k8s-version-406673 kubelet[2069]: E0916 11:42:13.894965    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.532522    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.534500    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669791    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-767ft" (UniqueName: "kubernetes.io/secret/28d14db2-66e4-43f6-8288-4ddc0f3a994c-storage-provisioner-token-767ft") pod "storage-provisioner" (UID: "28d14db2-66e4-43f6-8288-4ddc0f3a994c")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669832    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/28d14db2-66e4-43f6-8288-4ddc0f3a994c-tmp") pod "storage-provisioner" (UID: "28d14db2-66e4-43f6-8288-4ddc0f3a994c")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669854    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/684992a2-7081-4df3-a73e-a21569a28ce6-config-volume") pod "coredns-74ff55c5b-6xlgw" (UID: "684992a2-7081-4df3-a73e-a21569a28ce6")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669868    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-75kvx" (UniqueName: "kubernetes.io/secret/684992a2-7081-4df3-a73e-a21569a28ce6-coredns-token-75kvx") pod "coredns-74ff55c5b-6xlgw" (UID: "684992a2-7081-4df3-a73e-a21569a28ce6")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: W0916 11:42:33.858343    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32 WatchSource:0}: Error finding container eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32: Status 404 returned error &{%!s(*http.body=&{0xc0001a8060 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: W0916 11:42:33.859070    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33 WatchSource:0}: Error finding container 15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33: Status 404 returned error &{%!s(*http.body=&{0xc0001b7f60 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	
	
	==> storage-provisioner [33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d] <==
	I0916 11:42:33.942881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:42:33.952289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:42:33.952327       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:42:33.995195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:42:33.995263       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88c65391-c353-4f97-bac8-9bd49b9f0588", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77 became leader
	I0916 11:42:33.995326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	I0916 11:42:34.095721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	

-- /stdout --
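Note on the controller-manager log above: its single error-level entry ("Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified") is Kubernetes' optimistic-concurrency check, not a kindnet failure — the controller wrote status against a stale resourceVersion and retried on its own. When the same message appears during manual edits, re-read the object before writing instead of replacing a stale local copy; a minimal sketch, assuming a working kubectl against this profile:

    # Fetch the live object (picks up the current resourceVersion) ...
    kubectl --context old-k8s-version-406673 -n kube-system get ds kindnet -o yaml > kindnet.yaml
    # ... then apply, which patches against the live object rather than submitting a stale copy
    kubectl --context old-k8s-version-406673 apply -f kindnet.yaml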
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (572.669µs)
helpers_test.go:263: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (3.70s)
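Every kubectl invocation in this run dies the same way — fork/exec /usr/local/bin/kubectl: exec format error — which almost always means the binary at that path was built for a different architecture than the host (or is a truncated/HTML download), not that the cluster itself is unhealthy. A quick check-and-replace for the amd64 Linux agent used here, sketched outside the recorded run:

    file /usr/local/bin/kubectl   # a healthy binary reports "ELF 64-bit LSB executable, x86-64"
    uname -m                      # host architecture; x86_64 on this agent
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    sudo install -m 0755 kubectl /usr/local/bin/kubectl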

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-406673 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-406673 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-406673 describe deploy/metrics-server -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (581.415µs)
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-406673 describe deploy/metrics-server -n kube-system": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
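The image assertion above only fails because the describe command before it could not run; with a working kubectl, the registry override can be checked directly. An illustrative query mirroring what the test expects:

    kubectl --context old-k8s-version-406673 -n kube-system get deploy metrics-server \
        -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected: fake.domain/registry.k8s.io/echoserver:1.4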
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-406673
helpers_test.go:235: (dbg) docker inspect old-k8s-version-406673:

-- stdout --
	[
	    {
	        "Id": "28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b",
	        "Created": "2024-09-16T11:41:15.966557614Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:41:16.106919451Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hosts",
	        "LogPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b-json.log",
	        "Name": "/old-k8s-version-406673",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-406673:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-406673",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-406673",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-406673/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-406673",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eeb5fb104290f5dbbc6dda4f44d1ede524b4eca3b4a1c4e74d210afee339b2c7",
	            "SandboxKey": "/var/run/docker/netns/eeb5fb104290",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-406673": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49cf3e3468396ba01b588ae85b5e7bcdf3e6dcfeb05d207136018542ad1d54df",
	                    "EndpointID": "fd3146eb8ec55f5e8ad65367f8d3d1c86c03f630bbe9fea4a483f6e09022f0f3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-406673",
	                        "28d6c5fc26a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
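Most of what these tests need from an inspect dump like the one above is the host-side port mapping, which can be read without scanning the full JSON; illustrative docker CLI usage against this container:

    docker port old-k8s-version-406673 8443/tcp   # Kubernetes API server; 127.0.0.1:33091 in this run
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-406673   # SSH; 33088 here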
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25: (1.176105253s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo journalctl -xeu kubelet                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC | 16 Sep 24 11:40 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC |                     |
	|         | sudo systemctl status docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo docker system info                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                 |                           |         |         |                     |                     |
	|         | cri-docker --all --full                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat cri-docker                         |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                 | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf  |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                 | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cri-dockerd --version                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                 |                           |         |         |                     |                     |
	|         | containerd --all --full                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat containerd                         |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                 | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                           |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                         |                           |         |         |                     |                     |
	|         | \;                                                    |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                      |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                          | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                             | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                        | custom-flannel-838467     | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                            |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673       | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:41:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:41:09.129839  333016 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:41:09.130137  333016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:41:09.130147  333016 out.go:358] Setting ErrFile to fd 2...
	I0916 11:41:09.130151  333016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:41:09.130336  333016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:41:09.130914  333016 out.go:352] Setting JSON to false
	I0916 11:41:09.132012  333016 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5009,"bootTime":1726481860,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:41:09.132115  333016 start.go:139] virtualization: kvm guest
	I0916 11:41:07.485553  326192 out.go:235]   - Booting up control plane ...
	I0916 11:41:07.485672  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:41:07.485744  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:41:07.486328  326192 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:41:07.495914  326192 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:41:07.501658  326192 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:41:07.501769  326192 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:41:07.587736  326192 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:41:07.587886  326192 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:41:08.094403  326192 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.791161ms
	I0916 11:41:08.094558  326192 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
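
The [kubelet-check] and [api-check] waits above are deadline-bounded HTTP polls against a healthz endpoint. Below is a minimal, stdlib-only Go sketch of that pattern, not kubeadm source: the URL and the 4m0s budget come from the log, while the 500ms retry cadence is an assumption.

	// healthzpoll.go — a minimal sketch (not kubeadm source) of the
	// [kubelet-check]/[api-check] waits logged above: GET a healthz endpoint
	// until it answers 200 OK or the deadline passes.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // endpoint is healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // retry cadence (assumed)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// The log above allows up to 4m0s for the kubelet healthz on :10248.
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
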
	I0916 11:41:09.134384  333016 out.go:177] * [old-k8s-version-406673] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:41:09.136012  333016 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:41:09.136030  333016 notify.go:220] Checking for updates...
	I0916 11:41:09.138120  333016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:41:09.139236  333016 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:41:09.140392  333016 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:41:09.141671  333016 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:41:09.142978  333016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:41:09.144925  333016 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145143  333016 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145276  333016 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:09.145451  333016 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:41:09.170223  333016 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:41:09.170315  333016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:41:09.249446  333016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:74 SystemTime:2024-09-16 11:41:09.232481204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:41:09.249584  333016 docker.go:318] overlay module found
	I0916 11:41:09.251484  333016 out.go:177] * Using the docker driver based on user configuration
	I0916 11:41:09.252770  333016 start.go:297] selected driver: docker
	I0916 11:41:09.252787  333016 start.go:901] validating driver "docker" against <nil>
	I0916 11:41:09.252803  333016 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:41:09.253988  333016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:41:09.311590  333016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:74 SystemTime:2024-09-16 11:41:09.299494045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:41:09.311826  333016 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:41:09.312127  333016 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:41:09.314426  333016 out.go:177] * Using Docker driver with root privileges
	I0916 11:41:09.316047  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:09.316117  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:09.316131  333016 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:41:09.316215  333016 start.go:340] cluster config:
	{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:41:09.318014  333016 out.go:177] * Starting "old-k8s-version-406673" primary control-plane node in "old-k8s-version-406673" cluster
	I0916 11:41:09.319369  333016 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:41:09.320800  333016 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:41:09.322158  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:09.322191  333016 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:41:09.322200  333016 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 11:41:09.322238  333016 cache.go:56] Caching tarball of preloaded images
	I0916 11:41:09.322344  333016 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:41:09.322360  333016 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 11:41:09.322470  333016 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:41:09.322492  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json: {Name:mk5b7a46b7adef06d8ab94be0a464e9f79922d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:41:09.347179  333016 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:41:09.347202  333016 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:41:09.347274  333016 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:41:09.347293  333016 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:41:09.347302  333016 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:41:09.347311  333016 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:41:09.347321  333016 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:41:09.415165  333016 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:41:09.415223  333016 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:41:09.415268  333016 start.go:360] acquireMachinesLock for old-k8s-version-406673: {Name:mk8e16c995170a3c051ae96503b85729d385d06f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:41:09.415392  333016 start.go:364] duration metric: took 100.574µs to acquireMachinesLock for "old-k8s-version-406673"
	I0916 11:41:09.415421  333016 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:41:09.415511  333016 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:41:13.095977  326192 kubeadm.go:310] [api-check] The API server is healthy after 5.001444204s
	I0916 11:41:13.108645  326192 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:41:13.124915  326192 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:41:13.145729  326192 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:41:13.146046  326192 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-838467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:41:13.155883  326192 kubeadm.go:310] [bootstrap-token] Using token: arlmm3.z93mcdj0fcofrw2j
	I0916 11:41:09.417700  333016 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:41:09.418702  333016 start.go:159] libmachine.API.Create for "old-k8s-version-406673" (driver="docker")
	I0916 11:41:09.418758  333016 client.go:168] LocalClient.Create starting
	I0916 11:41:09.418863  333016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:41:09.418984  333016 main.go:141] libmachine: Decoding PEM data...
	I0916 11:41:09.419005  333016 main.go:141] libmachine: Parsing certificate...
	I0916 11:41:09.419062  333016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:41:09.419084  333016 main.go:141] libmachine: Decoding PEM data...
	I0916 11:41:09.419096  333016 main.go:141] libmachine: Parsing certificate...
	I0916 11:41:09.419492  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:41:09.447356  333016 cli_runner.go:211] docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:41:09.447439  333016 network_create.go:284] running [docker network inspect old-k8s-version-406673] to gather additional debugging logs...
	I0916 11:41:09.447459  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673
	W0916 11:41:09.466477  333016 cli_runner.go:211] docker network inspect old-k8s-version-406673 returned with exit code 1
	I0916 11:41:09.466514  333016 network_create.go:287] error running [docker network inspect old-k8s-version-406673]: docker network inspect old-k8s-version-406673: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-406673 not found
	I0916 11:41:09.466528  333016 network_create.go:289] output of [docker network inspect old-k8s-version-406673]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-406673 not found
	
	** /stderr **
	I0916 11:41:09.466624  333016 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:41:09.484833  333016 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:41:09.485829  333016 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:41:09.486598  333016 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:41:09.487223  333016 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:41:09.487906  333016 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:41:09.488504  333016 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:41:09.489409  333016 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002378380}
	I0916 11:41:09.489435  333016 network_create.go:124] attempt to create docker network old-k8s-version-406673 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:41:09.489487  333016 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-406673 old-k8s-version-406673
	I0916 11:41:09.569199  333016 network_create.go:108] docker network old-k8s-version-406673 192.168.103.0/24 created
	I0916 11:41:09.569238  333016 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-406673" container
	I0916 11:41:09.569290  333016 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:41:09.589253  333016 cli_runner.go:164] Run: docker volume create old-k8s-version-406673 --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:41:09.614891  333016 oci.go:103] Successfully created a docker volume old-k8s-version-406673
	I0916 11:41:09.614987  333016 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-406673-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --entrypoint /usr/bin/test -v old-k8s-version-406673:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:41:10.191535  333016 oci.go:107] Successfully prepared a docker volume old-k8s-version-406673
	I0916 11:41:10.191600  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:10.191641  333016 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:41:10.191709  333016 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-406673:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
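
The subnet scan a few lines above (skipping 192.168.49.0/24, .58, .67, .76, .85, .94 and settling on 192.168.103.0/24) advances the third octet in steps of 9 and takes the first /24 no existing Docker network occupies. A rough Go sketch of that selection, shelling out to the docker CLI; the step size is inferred from the log, and this is not minikube's actual implementation:

	// subnetpick.go — a rough sketch (not minikube source) of the free-subnet
	// scan logged above: start at 192.168.49.0/24 and advance the third octet
	// in steps of 9 until a /24 is not claimed by any existing Docker network.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// takenSubnets shells out to the docker CLI and collects the subnets of
	// every existing network.
	func takenSubnets() (map[string]bool, error) {
		ids, err := exec.Command("docker", "network", "ls", "-q").Output()
		if err != nil {
			return nil, err
		}
		taken := map[string]bool{}
		for _, id := range strings.Fields(string(ids)) {
			out, err := exec.Command("docker", "network", "inspect",
				"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
			if err != nil {
				continue // network may have vanished between ls and inspect
			}
			for _, s := range strings.Fields(string(out)) {
				taken[s] = true
			}
		}
		return taken, nil
	}

	func main() {
		taken, err := takenSubnets()
		if err != nil {
			fmt.Println(err)
			return
		}
		// Step of 9 is inferred from the log: 49, 58, 67, 76, 85, 94, 103.
		for octet := 49; octet < 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				fmt.Println("using free private subnet", subnet)
				return
			}
			fmt.Println("skipping subnet", subnet, "that is taken")
		}
	}
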
	I0916 11:41:13.157532  326192 out.go:235]   - Configuring RBAC rules ...
	I0916 11:41:13.157708  326192 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:41:13.161760  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:41:13.168287  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:41:13.171578  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:41:13.175747  326192 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:41:13.178942  326192 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:41:13.556267  326192 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:41:14.729155  326192 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:41:15.223914  326192 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:41:15.225001  326192 kubeadm.go:310] 
	I0916 11:41:15.225130  326192 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:41:15.225153  326192 kubeadm.go:310] 
	I0916 11:41:15.225274  326192 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:41:15.225295  326192 kubeadm.go:310] 
	I0916 11:41:15.225327  326192 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:41:15.225442  326192 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:41:15.225506  326192 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:41:15.225513  326192 kubeadm.go:310] 
	I0916 11:41:15.225585  326192 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:41:15.225594  326192 kubeadm.go:310] 
	I0916 11:41:15.225655  326192 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:41:15.225664  326192 kubeadm.go:310] 
	I0916 11:41:15.225726  326192 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:41:15.225793  326192 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:41:15.225858  326192 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:41:15.225864  326192 kubeadm.go:310] 
	I0916 11:41:15.225946  326192 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:41:15.226044  326192 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:41:15.226052  326192 kubeadm.go:310] 
	I0916 11:41:15.226146  326192 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token arlmm3.z93mcdj0fcofrw2j \
	I0916 11:41:15.226292  326192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:41:15.226330  326192 kubeadm.go:310] 	--control-plane 
	I0916 11:41:15.226339  326192 kubeadm.go:310] 
	I0916 11:41:15.226452  326192 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:41:15.226462  326192 kubeadm.go:310] 
	I0916 11:41:15.226567  326192 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token arlmm3.z93mcdj0fcofrw2j \
	I0916 11:41:15.226726  326192 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:41:15.230177  326192 kubeadm.go:310] W0916 11:41:05.103778    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:41:15.230544  326192 kubeadm.go:310] W0916 11:41:05.104714    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:41:15.230854  326192 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:41:15.231019  326192 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:41:15.231059  326192 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0916 11:41:15.240253  326192 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0916 11:41:15.886029  333016 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-406673:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.694248034s)
	I0916 11:41:15.886060  333016 kic.go:203] duration metric: took 5.694418556s to extract preloaded images to volume ...
	W0916 11:41:15.886197  333016 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:41:15.886315  333016 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:41:15.946925  333016 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-406673 --name old-k8s-version-406673 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-406673 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-406673 --network old-k8s-version-406673 --ip 192.168.103.2 --volume old-k8s-version-406673:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:41:16.264153  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Running}}
	I0916 11:41:16.284080  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.304543  333016 cli_runner.go:164] Run: docker exec old-k8s-version-406673 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:41:16.352309  333016 oci.go:144] the created container "old-k8s-version-406673" has a running status.
	I0916 11:41:16.352352  333016 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa...
	I0916 11:41:16.892301  333016 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:41:16.913952  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.935779  333016 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:41:16.935806  333016 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-406673 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:41:16.980961  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:41:16.999374  333016 machine.go:93] provisionDockerMachine start ...
	I0916 11:41:16.999449  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.020322  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.020675  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.020700  333016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:41:17.161159  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:41:17.161186  333016 ubuntu.go:169] provisioning hostname "old-k8s-version-406673"
	I0916 11:41:17.161236  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.179941  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.180126  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.180140  333016 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-406673 && echo "old-k8s-version-406673" | sudo tee /etc/hostname
	I0916 11:41:17.325696  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:41:17.325767  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.343273  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.343458  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.343478  333016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-406673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-406673/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-406673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:41:17.481523  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:41:17.481554  333016 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:41:17.481617  333016 ubuntu.go:177] setting up certificates
	I0916 11:41:17.481627  333016 provision.go:84] configureAuth start
	I0916 11:41:17.481677  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:17.501103  333016 provision.go:143] copyHostCerts
	I0916 11:41:17.501181  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:41:17.501192  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:41:17.501278  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:41:17.501418  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:41:17.501433  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:41:17.501476  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:41:17.501610  333016 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:41:17.501622  333016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:41:17.501659  333016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:41:17.501734  333016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-406673 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-406673]
	I0916 11:41:17.565274  333016 provision.go:177] copyRemoteCerts
	I0916 11:41:17.565358  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:41:17.565401  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.584534  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:17.682900  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:41:17.707241  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 11:41:17.730893  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:41:17.754303  333016 provision.go:87] duration metric: took 272.661409ms to configureAuth
	I0916 11:41:17.754331  333016 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:41:17.754493  333016 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:41:17.754609  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:17.772647  333016 main.go:141] libmachine: Using SSH client type: native
	I0916 11:41:17.772839  333016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:41:17.772862  333016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:41:18.029309  333016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:41:18.029373  333016 machine.go:96] duration metric: took 1.029938873s to provisionDockerMachine
	I0916 11:41:18.029387  333016 client.go:171] duration metric: took 8.610622274s to LocalClient.Create
	I0916 11:41:18.029411  333016 start.go:167] duration metric: took 8.610712242s to libmachine.API.Create "old-k8s-version-406673"
	I0916 11:41:18.029423  333016 start.go:293] postStartSetup for "old-k8s-version-406673" (driver="docker")
	I0916 11:41:18.029438  333016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:41:18.029502  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:41:18.029565  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.053377  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.151531  333016 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:41:18.155078  333016 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:41:18.155116  333016 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:41:18.155127  333016 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:41:18.155135  333016 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:41:18.155148  333016 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:41:18.155221  333016 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:41:18.155343  333016 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:41:18.155459  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:41:18.164209  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:41:18.188983  333016 start.go:296] duration metric: took 159.545394ms for postStartSetup
	I0916 11:41:18.189414  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:18.208296  333016 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:41:18.208603  333016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:41:18.208646  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.226298  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.318240  333016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:41:18.322605  333016 start.go:128] duration metric: took 8.907078338s to createHost
	I0916 11:41:18.322633  333016 start.go:83] releasing machines lock for "old-k8s-version-406673", held for 8.907228105s
	I0916 11:41:18.322689  333016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:41:18.341454  333016 ssh_runner.go:195] Run: cat /version.json
	I0916 11:41:18.341497  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.341552  333016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:41:18.341624  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:41:18.361726  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.362565  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:41:18.531472  333016 ssh_runner.go:195] Run: systemctl --version
	I0916 11:41:18.535744  333016 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:41:18.683220  333016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:41:18.690107  333016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:41:18.713733  333016 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:41:18.713813  333016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:41:18.747022  333016 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:41:18.747047  333016 start.go:495] detecting cgroup driver to use...
	I0916 11:41:18.747084  333016 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:41:18.747140  333016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:41:18.762745  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:41:18.774503  333016 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:41:18.774568  333016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:41:18.787349  333016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:41:18.801095  333016 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:41:18.890378  333016 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:41:18.976389  333016 docker.go:233] disabling docker service ...
	I0916 11:41:18.976456  333016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:41:19.000019  333016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:41:19.012839  333016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:41:19.097510  333016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
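
Throughout the provisioning chunk above, minikube repeatedly resolves the host port that Docker published for the container's 22/tcp (33088 here) before opening an SSH client. A stdlib-only sketch of the same lookup, using "docker port" instead of the inspect template shown in the log (an assumption, not minikube source):

	// sshport.go — resolve the host port Docker published for a container's
	// 22/tcp, the lookup the repeated "docker container inspect -f" calls
	// above perform. This sketch uses "docker port" instead of a Go template;
	// the container name comes from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "port", "old-k8s-version-406673", "22/tcp").Output()
		if err != nil {
			fmt.Println("docker port failed:", err)
			return
		}
		// First line looks like "127.0.0.1:33088"; keep what follows the last colon.
		mapping := strings.TrimSpace(strings.Split(string(out), "\n")[0])
		port := mapping[strings.LastIndex(mapping, ":")+1:]
		fmt.Println("ssh reachable on host port", port) // 33088 in the log above
	}
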
	I0916 11:41:15.242201  326192 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:41:15.242282  326192 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0916 11:41:15.247506  326192 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0916 11:41:15.247546  326192 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0916 11:41:15.272691  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:41:15.900673  326192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:41:15.900751  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:15.900763  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-838467 minikube.k8s.io/updated_at=2024_09_16T11_41_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=custom-flannel-838467 minikube.k8s.io/primary=true
	I0916 11:41:15.909744  326192 ops.go:34] apiserver oom_adj: -16
	I0916 11:41:16.023309  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:16.524490  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:17.023552  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:17.524056  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:18.023739  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:18.523649  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:19.024135  326192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:19.147138  326192 kubeadm.go:1113] duration metric: took 3.246461505s to wait for elevateKubeSystemPrivileges
	I0916 11:41:19.147176  326192 kubeadm.go:394] duration metric: took 14.233006135s to StartCluster
	I0916 11:41:19.147199  326192 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:19.147270  326192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:41:19.148868  326192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:19.149075  326192 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:41:19.149161  326192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:41:19.149222  326192 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:41:19.149310  326192 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-838467"
	I0916 11:41:19.149329  326192 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-838467"
	I0916 11:41:19.149371  326192 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-838467"
	I0916 11:41:19.149383  326192 host.go:66] Checking if "custom-flannel-838467" exists ...
	I0916 11:41:19.149387  326192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-838467"
	I0916 11:41:19.149454  326192 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:41:19.149819  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.150001  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.151132  326192 out.go:177] * Verifying Kubernetes components...
	I0916 11:41:19.152474  326192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:19.173524  326192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
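
The repeated "kubectl get sa default" calls above form a half-second retry loop: startup pauses until the default ServiceAccount exists before elevating kube-system privileges. A minimal Go sketch of the same wait; the 500ms cadence matches the timestamps in the log, while the 2-minute overall budget is an assumption:

	// sawait.go — a minimal sketch of the retry loop logged above: run
	// "kubectl get sa default" about every half second until the default
	// ServiceAccount exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // overall budget (assumed)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}
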
	I0916 11:41:19.203214  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:41:19.218863  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:41:19.238609  333016 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 11:41:19.238684  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.250087  333016 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:41:19.250145  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.259354  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.268531  333016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:41:19.279027  333016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:41:19.287949  333016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:41:19.297178  333016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:41:19.307577  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:19.387191  333016 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:41:19.487654  333016 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:41:19.487710  333016 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:41:19.491139  333016 start.go:563] Will wait 60s for crictl version
	I0916 11:41:19.491188  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:19.496116  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:41:19.544501  333016 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:41:19.544576  333016 ssh_runner.go:195] Run: crio --version
	I0916 11:41:19.578771  333016 ssh_runner.go:195] Run: crio --version
	I0916 11:41:19.643731  333016 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0916 11:41:19.173725  326192 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-838467"
	I0916 11:41:19.173990  326192 host.go:66] Checking if "custom-flannel-838467" exists ...
	I0916 11:41:19.174551  326192 cli_runner.go:164] Run: docker container inspect custom-flannel-838467 --format={{.State.Status}}
	I0916 11:41:19.175324  326192 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:41:19.175346  326192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:41:19.175405  326192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-838467
	I0916 11:41:19.197142  326192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/custom-flannel-838467/id_rsa Username:docker}
	I0916 11:41:19.198430  326192 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:41:19.198462  326192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:41:19.198538  326192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-838467
	I0916 11:41:19.224134  326192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/custom-flannel-838467/id_rsa Username:docker}
	I0916 11:41:19.335865  326192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:41:19.421603  326192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:41:19.422382  326192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:41:19.497244  326192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:41:19.839268  326192 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
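	[editor note] The long sed pipeline above splices a hosts plugin block into CoreDNS's Corefile so that host.minikube.internal resolves to the gateway IP (192.168.85.1 here). One way to confirm the injected record, assuming kubectl already points at this cluster:

		kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'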
	I0916 11:41:20.148001  326192 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-838467" to be "Ready" ...
	I0916 11:41:20.158855  326192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:41:19.645160  333016 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:41:19.661707  333016 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:41:19.665380  333016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:41:19.676415  333016 kubeadm.go:883] updating cluster {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:41:19.676535  333016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:41:19.676579  333016 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:41:19.742047  333016 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:41:19.742105  333016 ssh_runner.go:195] Run: which lz4
	I0916 11:41:19.745784  333016 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:41:19.749024  333016 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:41:19.749053  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 11:41:20.726623  333016 crio.go:462] duration metric: took 980.877496ms to copy over tarball
	I0916 11:41:20.726707  333016 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:41:23.267869  333016 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.541121164s)
	I0916 11:41:23.267903  333016 crio.go:469] duration metric: took 2.54124645s to extract the tarball
	I0916 11:41:23.267913  333016 ssh_runner.go:146] rm: /preloaded.tar.lz4
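	[editor note] The preload path avoids pulling images one by one: a ~473 MB lz4-compressed tarball of the container-image store is copied to the node and unpacked over /var. The --xattrs flags matter because file capabilities on the bundled binaries live in the security.capability extended attribute. This is the extraction command from the log, annotated:

		# unpack the preloaded image store into /var, preserving capability xattrs
		sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4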
	I0916 11:41:23.340628  333016 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:41:23.374342  333016 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:41:23.374368  333016 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:41:23.374427  333016 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.374457  333016 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:41:23.374497  333016 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.374502  333016 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.374514  333016 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.374530  333016 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.374495  333016 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.374427  333016 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:23.375894  333016 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.375896  333016 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.376044  333016 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.375896  333016 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.375906  333016 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.375906  333016 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:41:23.375914  333016 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.375914  333016 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:23.630361  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 11:41:23.660531  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.669314  333016 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:41:23.669405  333016 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:41:23.669458  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.677017  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.679340  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.682602  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.687346  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.706552  333016 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:41:23.706598  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.706602  333016 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.706706  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.733323  333016 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:41:23.733409  333016 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:41:23.733451  333016 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.733496  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.733421  333016 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.733568  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.738018  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.796536  333016 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:41:23.796583  333016 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.796639  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.807990  333016 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:41:23.808034  333016 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.808046  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.808076  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.809979  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:23.810071  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.810119  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.909741  333016 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:41:23.909838  333016 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:23.909861  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:23.909887  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:41:23.912887  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:23.912936  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:23.920082  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:41:23.920254  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:23.920369  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:24.097891  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.097902  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:24.110265  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:24.110310  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:41:24.110381  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:41:24.110394  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:41:24.112528  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:41:20.160096  326192 addons.go:510] duration metric: took 1.010872416s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:41:20.344573  326192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-838467" context rescaled to 1 replicas
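	[editor note] The rescale keeps a single-node cluster from running two coredns replicas. A hedged equivalent of what kapi.go does here, expressed as a plain kubectl call:

		kubectl --context custom-flannel-838467 -n kube-system scale deployment coredns --replicas=1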
	I0916 11:41:22.152238  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:24.231779  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.231878  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:41:24.299701  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:41:24.299787  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:41:24.299816  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:41:24.299863  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:41:24.330660  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:41:24.333761  333016 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:41:24.338478  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:41:24.405783  333016 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:41:24.516769  333016 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:41:24.655351  333016 cache_images.go:92] duration metric: took 1.280968033s to LoadCachedImages
	W0916 11:41:24.655436  333016 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
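	[editor note] Because the v1.20.0 images are not in the preload, minikube falls back to LoadCachedImages: each required image is inspected, removed from the runtime if its ID does not match, and reloaded from the local cache directory; here the pause_3.2 cache file is missing, so the load is abandoned and the images will be pulled normally. A hedged way to diff what the runtime has against what kubeadm needs (assumes jq on the node; have.txt/want.txt are scratch files for illustration):

		sudo crictl images -o json | jq -r '.images[].repoTags[]' | sort > have.txt
		kubeadm config images list --kubernetes-version v1.20.0 | sort > want.txt
		comm -13 have.txt want.txt   # images still to fetch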
	I0916 11:41:24.655451  333016 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 crio true true} ...
	I0916 11:41:24.655554  333016 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-406673 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
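	[editor note] The unit drop-in above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) clears ExecStart and restates it with the CRI-O socket and the node IP. One way to inspect the merged unit on a kic-driver node, assuming the profile name selects the right machine:

		minikube -p old-k8s-version-406673 ssh "systemctl cat kubelet"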
	I0916 11:41:24.655630  333016 ssh_runner.go:195] Run: crio config
	I0916 11:41:24.698372  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:24.698394  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:24.698405  333016 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:41:24.698433  333016 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-406673 NodeName:old-k8s-version-406673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:41:24.698606  333016 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-406673"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:41:24.698743  333016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:41:24.708344  333016 binaries.go:44] Found k8s binaries, skipping transfer
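	[editor note] With the kubeadm config rendered above and the v1.20.0 binaries already on disk, a dry run is a cheap way to validate the YAML before the real init. A sketch, run on the node before the cluster exists:

		sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run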
	I0916 11:41:24.708407  333016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:41:24.717550  333016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (481 bytes)
	I0916 11:41:24.734803  333016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:41:24.752339  333016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0916 11:41:24.769057  333016 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:41:24.772442  333016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
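	[editor note] Both host-record rewrites above use the same idiom: regenerate /etc/hosts without the stale entry, append the new one, and copy the result back via sudo, since the shell redirection itself cannot run privileged. Standalone form with the values from this log (/tmp/hosts.new is a scratch file for illustration):

		{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
		  echo $'192.168.103.2\tcontrol-plane.minikube.internal'; } > /tmp/hosts.new
		sudo cp /tmp/hosts.new /etc/hosts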
	I0916 11:41:24.782978  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:41:24.858827  333016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:41:24.871739  333016 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673 for IP: 192.168.103.2
	I0916 11:41:24.871765  333016 certs.go:194] generating shared ca certs ...
	I0916 11:41:24.871782  333016 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:24.871958  333016 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:41:24.872020  333016 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:41:24.872037  333016 certs.go:256] generating profile certs ...
	I0916 11:41:24.872110  333016 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key
	I0916 11:41:24.872131  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt with IP's: []
	I0916 11:41:25.048291  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt ...
	I0916 11:41:25.048318  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: {Name:mk4abba6a67f25ef9c59bbcacc5c5dee31e9387f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.048539  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key ...
	I0916 11:41:25.048558  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key: {Name:mk1c39c492dfee9b396f585a47b8783f07fe5103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.048670  333016 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db
	I0916 11:41:25.048688  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:41:25.381754  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db ...
	I0916 11:41:25.381783  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db: {Name:mkba7ece117fcceb2e5dcd2de345d183af279101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.381974  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db ...
	I0916 11:41:25.381991  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db: {Name:mk163caf0f8c6bde6835ea80dd77b20aeeee31cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.382087  333016 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt.13b4f1db -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt
	I0916 11:41:25.382180  333016 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key
	I0916 11:41:25.382257  333016 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key
	I0916 11:41:25.382279  333016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt with IP's: []
	I0916 11:41:25.486866  333016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt ...
	I0916 11:41:25.486894  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt: {Name:mkcd5e73a62407403f2b7382a6bee9d25e01d246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.487102  333016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key ...
	I0916 11:41:25.487119  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key: {Name:mk02438bf6f24dc9f1622119085bb7f5eb856e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:41:25.487333  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:41:25.487376  333016 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:41:25.487393  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:41:25.487423  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:41:25.487451  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:41:25.487489  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:41:25.487545  333016 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
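	[editor note] certs.go generates a client cert, an apiserver serving cert keyed by a hash of its SAN list (the .13b4f1db suffix above), and an aggregator proxy-client cert, then gathers the shared CA material. A quick sanity check that the apiserver cert carries the SANs logged above, using the path from this run:

		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt \
		  | grep -A1 'Subject Alternative Name'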
	I0916 11:41:25.488261  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:41:25.513968  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:41:25.538557  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:41:25.562712  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:41:25.585718  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:41:25.611011  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:41:25.636044  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:41:25.670989  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:41:25.696346  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:41:25.726347  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:41:25.751075  333016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:41:25.774722  333016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:41:25.792779  333016 ssh_runner.go:195] Run: openssl version
	I0916 11:41:25.800733  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:41:25.814085  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.818059  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.818119  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:41:25.825641  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:41:25.839273  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:41:25.851228  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.855171  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.855271  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:41:25.862163  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:41:25.871484  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:41:25.880429  333016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.883742  333016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.883801  333016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:41:25.890371  333016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
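	[editor note] The test -L / ln -fs dance exists because OpenSSL locates trust roots in /etc/ssl/certs by subject-hash symlinks: b5213941.0 is the hash of minikubeCA's subject plus a collision counter. The link name can be reproduced directly:

		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		ls -l "/etc/ssl/certs/${h}.0"   # should point back at minikubeCA.pem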
	I0916 11:41:25.901843  333016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:41:25.906238  333016 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:41:25.906290  333016 kubeadm.go:392] StartCluster: {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:41:25.906380  333016 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:41:25.906433  333016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:41:25.947314  333016 cri.go:89] found id: ""
	I0916 11:41:25.947371  333016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:41:25.956327  333016 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:41:25.965412  333016 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:41:25.965494  333016 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:41:25.974409  333016 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:41:25.974427  333016 kubeadm.go:157] found existing configuration files:
	
	I0916 11:41:25.974464  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:41:25.983428  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:41:25.983491  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:41:25.991673  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:41:26.002161  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:41:26.002229  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:41:26.013896  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:41:26.023373  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:41:26.023434  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:41:26.033671  333016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:41:26.044330  333016 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:41:26.044397  333016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:41:26.052990  333016 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:41:26.116552  333016 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 11:41:26.116953  333016 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:41:26.159382  333016 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:41:26.159511  333016 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:41:26.159572  333016 kubeadm.go:310] OS: Linux
	I0916 11:41:26.159642  333016 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:41:26.159724  333016 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:41:26.159793  333016 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:41:26.159860  333016 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:41:26.159924  333016 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:41:26.159993  333016 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:41:26.160055  333016 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:41:26.160116  333016 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:41:26.255274  333016 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:41:26.255371  333016 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:41:26.255493  333016 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 11:41:26.457194  333016 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:41:26.460187  333016 out.go:235]   - Generating certificates and keys ...
	I0916 11:41:26.460307  333016 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:41:26.460412  333016 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:41:26.745903  333016 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:41:27.101695  333016 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:41:27.277283  333016 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:41:27.532738  333016 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:41:27.685826  333016 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:41:27.686041  333016 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-406673] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:41:27.949848  333016 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:41:27.950175  333016 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-406673] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:41:28.302029  333016 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:41:28.615418  333016 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:41:28.692846  333016 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:41:28.692963  333016 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:41:28.844556  333016 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:41:28.948784  333016 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:41:29.064396  333016 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:41:24.651896  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:27.152349  326192 node_ready.go:53] node "custom-flannel-838467" has status "Ready":"False"
	I0916 11:41:27.651470  326192 node_ready.go:49] node "custom-flannel-838467" has status "Ready":"True"
	I0916 11:41:27.651491  326192 node_ready.go:38] duration metric: took 7.503462411s for node "custom-flannel-838467" to be "Ready" ...
	I0916 11:41:27.651501  326192 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:41:27.659052  326192 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:29.445363  333016 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:41:29.457728  333016 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:41:29.458698  333016 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:41:29.458771  333016 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:41:29.544165  333016 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:41:29.546617  333016 out.go:235]   - Booting up control plane ...
	I0916 11:41:29.546749  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:41:29.552789  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:41:29.553876  333016 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:41:29.554528  333016 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:41:29.556653  333016 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 11:41:29.665548  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:32.165305  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:34.665436  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:36.665933  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:42.059188  333016 kubeadm.go:310] [apiclient] All control plane components are healthy after 12.502447 seconds
	I0916 11:41:42.059386  333016 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:41:42.071733  333016 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:41:42.590849  333016 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:41:42.591044  333016 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-406673 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0916 11:41:43.098669  333016 kubeadm.go:310] [bootstrap-token] Using token: 24uzd8.f12jm4gfeszy41x7
	I0916 11:41:43.100371  333016 out.go:235]   - Configuring RBAC rules ...
	I0916 11:41:43.100541  333016 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:41:43.104683  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:41:43.111318  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:41:43.113371  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:41:43.115697  333016 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:41:43.118292  333016 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:41:43.126934  333016 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:41:43.360284  333016 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:41:43.516475  333016 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:41:43.517781  333016 kubeadm.go:310] 
	I0916 11:41:43.517878  333016 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:41:43.517889  333016 kubeadm.go:310] 
	I0916 11:41:43.518023  333016 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:41:43.518044  333016 kubeadm.go:310] 
	I0916 11:41:43.518068  333016 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:41:43.518140  333016 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:41:43.518207  333016 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:41:43.518214  333016 kubeadm.go:310] 
	I0916 11:41:43.518276  333016 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:41:43.518282  333016 kubeadm.go:310] 
	I0916 11:41:43.518322  333016 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:41:43.518349  333016 kubeadm.go:310] 
	I0916 11:41:43.518438  333016 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:41:43.518542  333016 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:41:43.518635  333016 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:41:43.518650  333016 kubeadm.go:310] 
	I0916 11:41:43.518802  333016 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:41:43.518905  333016 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:41:43.518915  333016 kubeadm.go:310] 
	I0916 11:41:43.519009  333016 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 24uzd8.f12jm4gfeszy41x7 \
	I0916 11:41:43.519175  333016 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:41:43.519216  333016 kubeadm.go:310]     --control-plane 
	I0916 11:41:43.519226  333016 kubeadm.go:310] 
	I0916 11:41:43.519328  333016 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:41:43.519343  333016 kubeadm.go:310] 
	I0916 11:41:43.519454  333016 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 24uzd8.f12jm4gfeszy41x7 \
	I0916 11:41:43.519608  333016 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:41:43.521710  333016 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:41:43.521904  333016 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
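	[editor note] Both warnings are expected under the docker driver: the host kernel's config is not exposed inside the kic container, hence the modprobe failure, which is exactly why SystemVerification appears on the --ignore-preflight-errors list above; and minikube starts kubelet itself rather than enabling the systemd unit. The missing kernel config can be reproduced in the node (paths assumed):

		ls /proc/config.gz /boot/config-"$(uname -r)" 2>/dev/null || echo "kernel config not exposed"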
	I0916 11:41:43.521936  333016 cni.go:84] Creating CNI manager for ""
	I0916 11:41:43.521946  333016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:41:43.523972  333016 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:41:43.525520  333016 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:41:43.529863  333016 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0916 11:41:43.529889  333016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:41:43.551346  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
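	[editor note] With the docker driver and a CRI-O runtime, minikube falls back to kindnet for pod networking; the 2601-byte manifest is written to the node and applied with the cluster's pinned kubectl. To check that the networking daemonset came up (daemonset name assumed from the kindnet manifest):

		sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get ds kindnet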
	I0916 11:41:43.999610  333016 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:41:43.999688  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:43.999735  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-406673 minikube.k8s.io/updated_at=2024_09_16T11_41_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=old-k8s-version-406673 minikube.k8s.io/primary=true
	I0916 11:41:44.008244  333016 ops.go:34] apiserver oom_adj: -16
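	[editor note] ops.go reads the apiserver's OOM score to confirm the kubelet protected the control plane: -16 on the legacy oom_adj scale means the kernel's OOM killer will strongly prefer other victims. An equivalent manual check on the node (oom_score_adj is the modern -1000..1000 knob):

		pid=$(pgrep -o kube-apiserver)
		cat /proc/$pid/oom_adj /proc/$pid/oom_score_adj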
	I0916 11:41:44.110534  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:39.164837  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:41.165886  326192 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:41:43.167455  326192 pod_ready.go:93] pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.167492  326192 pod_ready.go:82] duration metric: took 15.508409943s for pod "coredns-7c65d6cfc9-v8wnh" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.167506  326192 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.173572  326192 pod_ready.go:93] pod "etcd-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.173597  326192 pod_ready.go:82] duration metric: took 6.084061ms for pod "etcd-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.173608  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.179725  326192 pod_ready.go:93] pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.179750  326192 pod_ready.go:82] duration metric: took 6.135589ms for pod "kube-apiserver-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.179759  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.185203  326192 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.185229  326192 pod_ready.go:82] duration metric: took 5.46328ms for pod "kube-controller-manager-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.185240  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-4w8bp" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.190735  326192 pod_ready.go:93] pod "kube-proxy-4w8bp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.190759  326192 pod_ready.go:82] duration metric: took 5.51193ms for pod "kube-proxy-4w8bp" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.190771  326192 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.563503  326192 pod_ready.go:93] pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace has status "Ready":"True"
	I0916 11:41:43.563527  326192 pod_ready.go:82] duration metric: took 372.750298ms for pod "kube-scheduler-custom-flannel-838467" in "kube-system" namespace to be "Ready" ...
	I0916 11:41:43.563545  326192 pod_ready.go:39] duration metric: took 15.912032814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:41:43.563563  326192 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:41:43.563624  326192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:41:43.576500  326192 api_server.go:72] duration metric: took 24.427395386s to wait for apiserver process to appear ...
	I0916 11:41:43.576526  326192 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:41:43.576546  326192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:41:43.580307  326192 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:41:43.581394  326192 api_server.go:141] control plane version: v1.31.1
	I0916 11:41:43.581418  326192 api_server.go:131] duration metric: took 4.885665ms to wait for apiserver health ...
	I0916 11:41:43.581425  326192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:41:43.766131  326192 system_pods.go:59] 7 kube-system pods found
	I0916 11:41:43.766162  326192 system_pods.go:61] "coredns-7c65d6cfc9-v8wnh" [70e55c30-2327-486e-a2f2-45ca826531d5] Running
	I0916 11:41:43.766167  326192 system_pods.go:61] "etcd-custom-flannel-838467" [c47fb50c-7a36-43f2-8b62-a341436839c9] Running
	I0916 11:41:43.766170  326192 system_pods.go:61] "kube-apiserver-custom-flannel-838467" [36053552-7860-4bd5-9898-ffb7ab082a55] Running
	I0916 11:41:43.766174  326192 system_pods.go:61] "kube-controller-manager-custom-flannel-838467" [1b575692-31f1-4a70-be42-76c9439fa88d] Running
	I0916 11:41:43.766178  326192 system_pods.go:61] "kube-proxy-4w8bp" [0aa1010b-96bf-491d-b9ca-f9fb9b9cfbf8] Running
	I0916 11:41:43.766181  326192 system_pods.go:61] "kube-scheduler-custom-flannel-838467" [dc64976a-912d-4ba4-869a-a96a59c28ecd] Running
	I0916 11:41:43.766183  326192 system_pods.go:61] "storage-provisioner" [506055cc-e639-4857-adbc-0c254600538f] Running
	I0916 11:41:43.766191  326192 system_pods.go:74] duration metric: took 184.758722ms to wait for pod list to return data ...
	I0916 11:41:43.766197  326192 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:41:43.964353  326192 default_sa.go:45] found service account: "default"
	I0916 11:41:43.964386  326192 default_sa.go:55] duration metric: took 198.182376ms for default service account to be created ...
	I0916 11:41:43.964400  326192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:41:44.167530  326192 system_pods.go:86] 7 kube-system pods found
	I0916 11:41:44.167574  326192 system_pods.go:89] "coredns-7c65d6cfc9-v8wnh" [70e55c30-2327-486e-a2f2-45ca826531d5] Running
	I0916 11:41:44.167584  326192 system_pods.go:89] "etcd-custom-flannel-838467" [c47fb50c-7a36-43f2-8b62-a341436839c9] Running
	I0916 11:41:44.167591  326192 system_pods.go:89] "kube-apiserver-custom-flannel-838467" [36053552-7860-4bd5-9898-ffb7ab082a55] Running
	I0916 11:41:44.167597  326192 system_pods.go:89] "kube-controller-manager-custom-flannel-838467" [1b575692-31f1-4a70-be42-76c9439fa88d] Running
	I0916 11:41:44.167602  326192 system_pods.go:89] "kube-proxy-4w8bp" [0aa1010b-96bf-491d-b9ca-f9fb9b9cfbf8] Running
	I0916 11:41:44.167608  326192 system_pods.go:89] "kube-scheduler-custom-flannel-838467" [dc64976a-912d-4ba4-869a-a96a59c28ecd] Running
	I0916 11:41:44.167612  326192 system_pods.go:89] "storage-provisioner" [506055cc-e639-4857-adbc-0c254600538f] Running
	I0916 11:41:44.167621  326192 system_pods.go:126] duration metric: took 203.213461ms to wait for k8s-apps to be running ...
	I0916 11:41:44.167631  326192 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:41:44.167685  326192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:41:44.180782  326192 system_svc.go:56] duration metric: took 13.141604ms WaitForService to wait for kubelet
	I0916 11:41:44.180814  326192 kubeadm.go:582] duration metric: took 25.031715543s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:41:44.180838  326192 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:41:44.364740  326192 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:41:44.364769  326192 node_conditions.go:123] node cpu capacity is 8
	I0916 11:41:44.364779  326192 node_conditions.go:105] duration metric: took 183.936169ms to run NodePressure ...
	I0916 11:41:44.364790  326192 start.go:241] waiting for startup goroutines ...
	I0916 11:41:44.364796  326192 start.go:246] waiting for cluster config update ...
	I0916 11:41:44.364805  326192 start.go:255] writing updated cluster config ...
	I0916 11:41:44.365079  326192 ssh_runner.go:195] Run: rm -f paused
	I0916 11:41:44.371879  326192 out.go:177] * Done! kubectl is now configured to use "custom-flannel-838467" cluster and "default" namespace by default
	E0916 11:41:44.373468  326192 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
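
The "exec format error" here is ENOEXEC from the kernel's fork/exec: /usr/local/bin/kubectl on the test host is not a valid executable for this platform, typically a wrong-architecture or truncated binary. Since every test that shells out to the host kubectl hits the same failure, this one error plausibly explains many of the short-duration failures in this report. A quick triage sketch:

    file /usr/local/bin/kubectl                # expect: ELF 64-bit LSB executable, x86-64, ...
    uname -m                                   # host architecture, e.g. x86_64
    head -c4 /usr/local/bin/kubectl | od -c    # a valid ELF file starts with 177 E L F
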
	I0916 11:41:44.611272  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:45.110742  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:45.610915  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:46.110672  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:46.611285  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:47.111092  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:47.610788  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:48.111373  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:48.611189  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:49.110790  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:49.611662  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:50.111045  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:50.611562  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:51.111442  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:51.611212  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:52.111501  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:52.611443  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:53.111633  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:53.611581  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:54.111313  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:54.611583  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:55.111268  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:55.610651  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:56.110600  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:56.610770  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:57.111250  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:57.610984  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:58.111247  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:58.611501  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:59.111271  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:41:59.611607  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.110881  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.611603  333016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:42:00.717585  333016 kubeadm.go:1113] duration metric: took 16.717955139s to wait for elevateKubeSystemPrivileges
	I0916 11:42:00.717628  333016 kubeadm.go:394] duration metric: took 34.811339511s to StartCluster
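
The half-second cadence of the "get sa default" polls above is the elevateKubeSystemPrivileges wait: minikube polls until the default ServiceAccount exists before moving on. The equivalent shell loop, using the same pinned kubectl and kubeconfig as the log:

    until sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
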
	I0916 11:42:00.717650  333016 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:42:00.717734  333016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:42:00.719920  333016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:42:00.720139  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:42:00.720142  333016 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:42:00.720381  333016 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:42:00.720426  333016 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:42:00.720490  333016 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-406673"
	I0916 11:42:00.720512  333016 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-406673"
	I0916 11:42:00.720537  333016 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:42:00.720922  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.720974  333016 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-406673"
	I0916 11:42:00.721002  333016 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-406673"
	I0916 11:42:00.721279  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.722177  333016 out.go:177] * Verifying Kubernetes components...
	I0916 11:42:00.723934  333016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:42:00.752502  333016 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-406673"
	I0916 11:42:00.752539  333016 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:42:00.755899  333016 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:42:00.756270  333016 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:42:00.757582  333016 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:42:00.757605  333016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:42:00.757662  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:42:00.776137  333016 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:42:00.776158  333016 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:42:00.776215  333016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:42:00.777250  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:42:00.793326  333016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:42:01.011292  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:42:01.019742  333016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:42:01.096506  333016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:42:01.120265  333016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:42:01.516905  333016 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
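
The sed pipeline at 11:42:01 rewrites the coredns ConfigMap in place. The stanza it injects ahead of the "forward . /etc/resolv.conf" line (along with a "log" directive before "errors") is what makes host.minikube.internal resolve to the Docker network gateway:

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }
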
	I0916 11:42:01.535935  333016 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:42:01.796472  333016 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:42:01.798178  333016 addons.go:510] duration metric: took 1.077738203s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:42:02.021938  333016 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-406673" context rescaled to 1 replicas
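
kubeadm ships coredns with two replicas; on a single-node cluster minikube scales the Deployment down to one, as logged above. A hand-run equivalent would be roughly:

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1
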
	I0916 11:42:03.540269  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:06.039405  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:08.039450  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:10.578149  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:13.039705  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:15.040491  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:17.539137  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:19.539764  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:22.039970  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:24.539528  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:27.039570  333016 node_ready.go:53] node "old-k8s-version-406673" has status "Ready":"False"
	I0916 11:42:29.038931  333016 node_ready.go:49] node "old-k8s-version-406673" has status "Ready":"True"
	I0916 11:42:29.038954  333016 node_ready.go:38] duration metric: took 27.502986487s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:42:29.038963  333016 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:42:29.045578  333016 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:42:31.049070  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:42:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 11:42:33.049733  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:42:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 11:42:35.051703  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:37.552157  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:40.051048  333016 pod_ready.go:103] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:40.551252  333016 pod_ready.go:93] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"True"
	I0916 11:42:40.551275  333016 pod_ready.go:82] duration metric: took 11.505673624s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
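
The two Pending entries at 11:42:31-33 show why coredns trailed node readiness: the scheduler refused placement while the node still carried the node.kubernetes.io/not-ready taint. The taint state can be read directly, for example:

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node old-k8s-version-406673 -o jsonpath='{.spec.taints}'
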
	I0916 11:42:40.551286  333016 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:42:42.558047  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:45.057493  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:47.057603  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:49.556869  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:51.557684  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:54.056762  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:56.058223  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:42:58.557744  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:01.057276  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:03.058237  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:05.557660  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:08.057228  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:10.057485  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:12.556652  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:14.557496  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:17.057859  333016 pod_ready.go:103] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:43:19.058214  333016 pod_ready.go:93] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.058243  333016 pod_ready.go:82] duration metric: took 38.506948862s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.058265  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.063031  333016 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.063055  333016 pod_ready.go:82] duration metric: took 4.781482ms for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.063071  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.069862  333016 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.069881  333016 pod_ready.go:82] duration metric: took 6.802265ms for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.069890  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.074303  333016 pod_ready.go:93] pod "kube-proxy-pcbvp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.074328  333016 pod_ready.go:82] duration metric: took 4.43151ms for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.074338  333016 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.078134  333016 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:43:19.078154  333016 pod_ready.go:82] duration metric: took 3.809778ms for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:43:19.078164  333016 pod_ready.go:39] duration metric: took 50.039189729s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
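
The pod_ready loop above is minikube's built-in readiness wait over those six label selectors. A rough hand-rolled equivalent using kubectl wait would look like:

    for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=6m
    done
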
	I0916 11:43:19.078180  333016 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:43:19.078230  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:19.078279  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:19.114156  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:19.114176  333016 cri.go:89] found id: ""
	I0916 11:43:19.114183  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:19.114235  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.117974  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:19.118035  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:19.152156  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:19.152181  333016 cri.go:89] found id: ""
	I0916 11:43:19.152192  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:19.152246  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.155805  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:19.155863  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:19.190036  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:19.190057  333016 cri.go:89] found id: ""
	I0916 11:43:19.190064  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:19.190111  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.193389  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:19.193445  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:19.227236  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:19.227263  333016 cri.go:89] found id: ""
	I0916 11:43:19.227270  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:19.227325  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.230784  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:19.230843  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:19.264360  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:19.264380  333016 cri.go:89] found id: ""
	I0916 11:43:19.264388  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:19.264437  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.267844  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:19.267916  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:19.300894  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:19.300916  333016 cri.go:89] found id: ""
	I0916 11:43:19.300925  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:19.300982  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.304410  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:19.304463  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:19.338532  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:19.338561  333016 cri.go:89] found id: ""
	I0916 11:43:19.338570  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:19.338617  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:19.342059  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:19.342087  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:19.375568  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:19.375598  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:19.412566  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:19.412600  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:19.447709  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:19.447738  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:19.485244  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:19.485272  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:19.583549  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:19.583577  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:19.619156  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:19.619188  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:19.664569  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:19.664605  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:19.698129  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:19.698158  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:19.747705  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:19.747738  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:19.798683  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:19.798720  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:19.862046  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:19.862082  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
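
The gather above (per-container crictl logs --tail 400, journalctl for crio and kubelet, filtered dmesg) is essentially the same material "minikube logs" collects; to capture it for a bug report:

    minikube -p old-k8s-version-406673 logs --file=old-k8s-version-406673.log
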
	I0916 11:43:22.384464  333016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:43:22.396937  333016 api_server.go:72] duration metric: took 1m21.676729889s to wait for apiserver process to appear ...
	I0916 11:43:22.396965  333016 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:43:22.397008  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:22.397062  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:22.430612  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:22.430638  333016 cri.go:89] found id: ""
	I0916 11:43:22.430646  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:22.430694  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.434324  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:22.434382  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:22.469323  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:22.469375  333016 cri.go:89] found id: ""
	I0916 11:43:22.469385  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:22.469455  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.473369  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:22.473438  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:22.507487  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:22.507514  333016 cri.go:89] found id: ""
	I0916 11:43:22.507524  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:22.507610  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.511481  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:22.511553  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:22.546774  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:22.546797  333016 cri.go:89] found id: ""
	I0916 11:43:22.546806  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:22.546854  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.550741  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:22.550815  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:22.584441  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:22.584466  333016 cri.go:89] found id: ""
	I0916 11:43:22.584478  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:22.584518  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.587995  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:22.588052  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:22.621210  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:22.621232  333016 cri.go:89] found id: ""
	I0916 11:43:22.621238  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:22.621288  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.624788  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:22.624860  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:22.659577  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:22.659601  333016 cri.go:89] found id: ""
	I0916 11:43:22.659622  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:22.659672  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:22.663356  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:22.663381  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:22.759410  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:22.759439  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:22.794834  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:22.794863  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:22.834275  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:22.834316  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:22.868286  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:22.868315  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:22.917081  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:22.917114  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:22.967952  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:22.967987  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:23.027899  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:23.027937  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:43:23.048542  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:23.048576  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:23.086646  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:23.086676  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:23.122143  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:23.122173  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:23.169305  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:23.169352  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:25.703925  333016 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:43:25.710132  333016 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:43:25.711030  333016 api_server.go:141] control plane version: v1.20.0
	I0916 11:43:25.711051  333016 api_server.go:131] duration metric: took 3.314079399s to wait for apiserver health ...
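
The healthz probe is a plain HTTPS GET, and under default kubeadm RBAC the system:public-info-viewer binding exposes /healthz to unauthenticated clients, so the same check can be reproduced from the host (-k skips verification of the self-signed serving cert):

    curl -sk https://192.168.103.2:8443/healthz
    # ok
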
	I0916 11:43:25.711059  333016 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:43:25.711077  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:43:25.711124  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:43:25.744083  333016 cri.go:89] found id: "31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:25.744104  333016 cri.go:89] found id: ""
	I0916 11:43:25.744114  333016 logs.go:276] 1 containers: [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02]
	I0916 11:43:25.744169  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.747732  333016 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:43:25.747806  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:43:25.780830  333016 cri.go:89] found id: "1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:25.780855  333016 cri.go:89] found id: ""
	I0916 11:43:25.780864  333016 logs.go:276] 1 containers: [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298]
	I0916 11:43:25.780905  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.784503  333016 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:43:25.784565  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:43:25.819038  333016 cri.go:89] found id: "d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:25.819061  333016 cri.go:89] found id: ""
	I0916 11:43:25.819068  333016 logs.go:276] 1 containers: [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0]
	I0916 11:43:25.819116  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.822868  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:43:25.822952  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:43:25.857513  333016 cri.go:89] found id: "6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:25.857536  333016 cri.go:89] found id: ""
	I0916 11:43:25.857545  333016 logs.go:276] 1 containers: [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621]
	I0916 11:43:25.857604  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.861133  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:43:25.861199  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:43:25.895136  333016 cri.go:89] found id: "de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:25.895165  333016 cri.go:89] found id: ""
	I0916 11:43:25.895175  333016 logs.go:276] 1 containers: [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c]
	I0916 11:43:25.895233  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.898774  333016 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:43:25.898849  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:43:25.932895  333016 cri.go:89] found id: "9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:25.932918  333016 cri.go:89] found id: ""
	I0916 11:43:25.932927  333016 logs.go:276] 1 containers: [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7]
	I0916 11:43:25.932981  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.936427  333016 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:43:25.936488  333016 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:43:25.972284  333016 cri.go:89] found id: "342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:25.972305  333016 cri.go:89] found id: ""
	I0916 11:43:25.972312  333016 logs.go:276] 1 containers: [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1]
	I0916 11:43:25.972351  333016 ssh_runner.go:195] Run: which crictl
	I0916 11:43:25.975973  333016 logs.go:123] Gathering logs for dmesg ...
	I0916 11:43:25.976004  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:43:25.996792  333016 logs.go:123] Gathering logs for kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] ...
	I0916 11:43:25.996823  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02"
	I0916 11:43:26.043167  333016 logs.go:123] Gathering logs for etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] ...
	I0916 11:43:26.043205  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298"
	I0916 11:43:26.079042  333016 logs.go:123] Gathering logs for kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] ...
	I0916 11:43:26.079070  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621"
	I0916 11:43:26.116242  333016 logs.go:123] Gathering logs for kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] ...
	I0916 11:43:26.116270  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1"
	I0916 11:43:26.152271  333016 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:43:26.152296  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:43:26.202878  333016 logs.go:123] Gathering logs for kubelet ...
	I0916 11:43:26.202913  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:43:26.264457  333016 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:43:26.264495  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:43:26.363604  333016 logs.go:123] Gathering logs for coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] ...
	I0916 11:43:26.363636  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0"
	I0916 11:43:26.398030  333016 logs.go:123] Gathering logs for kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] ...
	I0916 11:43:26.398055  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c"
	I0916 11:43:26.431498  333016 logs.go:123] Gathering logs for kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] ...
	I0916 11:43:26.431531  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7"
	I0916 11:43:26.479671  333016 logs.go:123] Gathering logs for container status ...
	I0916 11:43:26.479703  333016 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:43:29.023411  333016 system_pods.go:59] 8 kube-system pods found
	I0916 11:43:29.023440  333016 system_pods.go:61] "coredns-74ff55c5b-6xlgw" [684992a2-7081-4df3-a73e-a21569a28ce6] Running
	I0916 11:43:29.023445  333016 system_pods.go:61] "etcd-old-k8s-version-406673" [d8c0d4cd-1c4a-4881-9f18-d54a4433f8ab] Running
	I0916 11:43:29.023448  333016 system_pods.go:61] "kindnet-mjcgf" [5888dd63-6767-4920-ac13-becf70cd6481] Running
	I0916 11:43:29.023452  333016 system_pods.go:61] "kube-apiserver-old-k8s-version-406673" [00ed1d06-176e-453e-a0bf-29244d78687c] Running
	I0916 11:43:29.023455  333016 system_pods.go:61] "kube-controller-manager-old-k8s-version-406673" [5b6c1595-560a-41d9-b653-9bf2a5c85f67] Running
	I0916 11:43:29.023459  333016 system_pods.go:61] "kube-proxy-pcbvp" [d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1] Running
	I0916 11:43:29.023462  333016 system_pods.go:61] "kube-scheduler-old-k8s-version-406673" [d6f812b4-bf33-454d-8375-fe804f003016] Running
	I0916 11:43:29.023465  333016 system_pods.go:61] "storage-provisioner" [28d14db2-66e4-43f6-8288-4ddc0f3a994c] Running
	I0916 11:43:29.023471  333016 system_pods.go:74] duration metric: took 3.312405641s to wait for pod list to return data ...
	I0916 11:43:29.023478  333016 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:43:29.025649  333016 default_sa.go:45] found service account: "default"
	I0916 11:43:29.025676  333016 default_sa.go:55] duration metric: took 2.190408ms for default service account to be created ...
	I0916 11:43:29.025686  333016 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:43:29.033323  333016 system_pods.go:86] 8 kube-system pods found
	I0916 11:43:29.033381  333016 system_pods.go:89] "coredns-74ff55c5b-6xlgw" [684992a2-7081-4df3-a73e-a21569a28ce6] Running
	I0916 11:43:29.033390  333016 system_pods.go:89] "etcd-old-k8s-version-406673" [d8c0d4cd-1c4a-4881-9f18-d54a4433f8ab] Running
	I0916 11:43:29.033396  333016 system_pods.go:89] "kindnet-mjcgf" [5888dd63-6767-4920-ac13-becf70cd6481] Running
	I0916 11:43:29.033405  333016 system_pods.go:89] "kube-apiserver-old-k8s-version-406673" [00ed1d06-176e-453e-a0bf-29244d78687c] Running
	I0916 11:43:29.033411  333016 system_pods.go:89] "kube-controller-manager-old-k8s-version-406673" [5b6c1595-560a-41d9-b653-9bf2a5c85f67] Running
	I0916 11:43:29.033418  333016 system_pods.go:89] "kube-proxy-pcbvp" [d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1] Running
	I0916 11:43:29.033423  333016 system_pods.go:89] "kube-scheduler-old-k8s-version-406673" [d6f812b4-bf33-454d-8375-fe804f003016] Running
	I0916 11:43:29.033431  333016 system_pods.go:89] "storage-provisioner" [28d14db2-66e4-43f6-8288-4ddc0f3a994c] Running
	I0916 11:43:29.033444  333016 system_pods.go:126] duration metric: took 7.751194ms to wait for k8s-apps to be running ...
	I0916 11:43:29.033457  333016 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:43:29.033512  333016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:43:29.045813  333016 system_svc.go:56] duration metric: took 12.349678ms WaitForService to wait for kubelet
	I0916 11:43:29.045837  333016 kubeadm.go:582] duration metric: took 1m28.325673057s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:43:29.045852  333016 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:43:29.048437  333016 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:43:29.048464  333016 node_conditions.go:123] node cpu capacity is 8
	I0916 11:43:29.048478  333016 node_conditions.go:105] duration metric: took 2.620808ms to run NodePressure ...
	I0916 11:43:29.048492  333016 start.go:241] waiting for startup goroutines ...
	I0916 11:43:29.048501  333016 start.go:246] waiting for cluster config update ...
	I0916 11:43:29.048515  333016 start.go:255] writing updated cluster config ...
	I0916 11:43:29.048782  333016 ssh_runner.go:195] Run: rm -f paused
	I0916 11:43:29.055620  333016 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-406673" cluster and "default" namespace by default
	E0916 11:43:29.057070  333016 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.856770052Z" level=info msg="Checking pod kube-system_coredns-74ff55c5b-6xlgw for CNI network kindnet (type=ptp)"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.859357089Z" level=info msg="Ran pod sandbox eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32 with infra container: kube-system/storage-provisioner/POD" id=c902770d-194b-4540-8ac4-7301f0545b96 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.859526385Z" level=info msg="Ran pod sandbox 15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33 with infra container: kube-system/coredns-74ff55c5b-6xlgw/POD" id=f576a970-6d7c-4b43-af9e-da0ea0eb3ad3 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860222892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c4d6a36e-e674-485c-a8db-f3ac539a2447 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860278624Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=10f143f9-9fa6-4a76-a15b-32952af72ee1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860399431Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c4d6a36e-e674-485c-a8db-f3ac539a2447 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860422997Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=10f143f9-9fa6-4a76-a15b-32952af72ee1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.860976080Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6bb0be3-9f1a-4237-81de-68bd60b184b1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861016518Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=306f539a-c560-4371-8f00-331724f83370 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861171586Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=306f539a-c560-4371-8f00-331724f83370 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.861259251Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b6bb0be3-9f1a-4237-81de-68bd60b184b1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862001870Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=eb2e30c4-75e5-4521-ab72-cc7869c1fce1 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862040425Z" level=info msg="Creating container: kube-system/coredns-74ff55c5b-6xlgw/coredns" id=1df64f26-2035-4f6b-95f6-226bec645aec name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862080433Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.862120024Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.878701585Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1e2bfb353a2952745b9f6b0c04ba55371973020ad1e5e874c5dd82658c63be84/merged/etc/passwd: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.878750411Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1e2bfb353a2952745b9f6b0c04ba55371973020ad1e5e874c5dd82658c63be84/merged/etc/group: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.879154582Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/99e7b464885912b28b588c11a83ff47920ae95ffea4c649719c5189f8ead6e3c/merged/etc/passwd: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.879187538Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/99e7b464885912b28b588c11a83ff47920ae95ffea4c649719c5189f8ead6e3c/merged/etc/group: no such file or directory"
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.918889133Z" level=info msg="Created container d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0: kube-system/coredns-74ff55c5b-6xlgw/coredns" id=1df64f26-2035-4f6b-95f6-226bec645aec name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.919466702Z" level=info msg="Starting container: d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0" id=38ac1fc0-fac9-4a00-8484-820e0b437755 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.922270175Z" level=info msg="Created container 33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d: kube-system/storage-provisioner/storage-provisioner" id=eb2e30c4-75e5-4521-ab72-cc7869c1fce1 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.922845211Z" level=info msg="Starting container: 33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d" id=7db5cf50-ec68-4b78-aebf-9b05d6d07e42 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.926237079Z" level=info msg="Started container" PID=2929 containerID=d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0 description=kube-system/coredns-74ff55c5b-6xlgw/coredns id=38ac1fc0-fac9-4a00-8484-820e0b437755 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33
	Sep 16 11:42:33 old-k8s-version-406673 crio[1020]: time="2024-09-16 11:42:33.929776182Z" level=info msg="Started container" PID=2936 containerID=33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d description=kube-system/storage-provisioner/storage-provisioner id=7db5cf50-ec68-4b78-aebf-9b05d6d07e42 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4db88b336bed       bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16                                     About a minute ago   Running             coredns                   0                   15c3605023254       coredns-74ff55c5b-6xlgw
	33a7974b5f09f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     About a minute ago   Running             storage-provisioner       0                   eee3fde4da330       storage-provisioner
	342a012c428e0       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b   About a minute ago   Running             kindnet-cni               0                   3d1945d7b04c2       kindnet-mjcgf
	de3eaebd990dc       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc                                     About a minute ago   Running             kube-proxy                0                   8c9b9fc80cd42       kube-proxy-pcbvp
	6f6e59b67f114       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899                                     About a minute ago   Running             kube-scheduler            0                   dbdf46e21272e       kube-scheduler-old-k8s-version-406673
	31259a2842c01       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99                                     About a minute ago   Running             kube-apiserver            0                   2bf825db35d7b       kube-apiserver-old-k8s-version-406673
	1612fad1a4d07       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                     About a minute ago   Running             etcd                      0                   1eaac4c5376fc       etcd-old-k8s-version-406673
	9aff740155270       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080                                     About a minute ago   Running             kube-controller-manager   0                   483dd0ba7fd68       kube-controller-manager-old-k8s-version-406673
	
	
	==> coredns [d4db88b336bed0a777d1d359197b47f37b041c615d70d97ec3a9604fbb87d2e0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:38442 - 48402 "HINFO IN 8440324266966115617.7448481208015864567. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011622953s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-406673
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-406673
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-406673
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_41_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:41:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-406673
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:43:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:42:28 +0000   Mon, 16 Sep 2024 11:42:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-406673
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 318da86b3a3c4fd0827c12705ac51529
	  System UUID:                2d5bda39-09b0-43d0-95f9-1ff418499524
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-6xlgw                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     94s
	  kube-system                 etcd-old-k8s-version-406673                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         105s
	  kube-system                 kindnet-mjcgf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-old-k8s-version-406673             250m (3%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-old-k8s-version-406673    200m (2%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-pcbvp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-old-k8s-version-406673             100m (1%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 metrics-server-9975d5f86-zkwwm                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         0s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 106s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s  kubelet     Node old-k8s-version-406673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientPID
	  Normal  Starting                 93s   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                66s   kubelet     Node old-k8s-version-406673 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 7b 93 72 59 99 08 06
	[Sep16 11:38] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e c8 59 6d ba 48 08 06
	[Sep16 11:39] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 0e 56 ba 2b 08 08 06
	[  +0.072831] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 e4 c5 5d 5b cd 08 06
	
	
	==> etcd [1612fad1a4d07a4f7252ef90046d6b025ba3398b0b81c63324e24b9cfb761298] <==
	2024-09-16 11:41:36.508921 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 is starting a new election at term 1
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 became candidate at term 2
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2
	raft2024/09/16 11:41:37 INFO: f23060b075c4c089 became leader at term 2
	raft2024/09/16 11:41:37 INFO: raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2
	2024-09-16 11:41:37.294464 I | etcdserver: published {Name:old-k8s-version-406673 ClientURLs:[https://192.168.103.2:2379]} to cluster 3336683c081d149d
	2024-09-16 11:41:37.294487 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-16 11:41:37.294537 I | embed: ready to serve client requests
	2024-09-16 11:41:37.294728 I | embed: ready to serve client requests
	2024-09-16 11:41:37.295159 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-16 11:41:37.296260 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-16 11:41:37.297103 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-16 11:41:37.298217 I | embed: serving client requests on 192.168.103.2:2379
	2024-09-16 11:41:55.011036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:04.397724 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:14.397752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:24.397850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:34.397672 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:44.397732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:42:54.397786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:04.397868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:14.397710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:24.397875 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:43:34.397816 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:43:34 up  1:25,  0 users,  load average: 0.94, 1.11, 0.91
	Linux old-k8s-version-406673 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [342a012c428e05c84f7600970cd1f6299c6027b043fe08bbeeeb07ae835f8cb1] <==
	I0916 11:42:05.095640       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:42:05.095656       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:42:05.095674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:42:05.394421       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:42:05.394469       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:42:05.394477       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:42:05.695253       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:42:05.695279       1 metrics.go:61] Registering metrics
	I0916 11:42:05.695331       1 controller.go:374] Syncing nftables rules
	I0916 11:42:15.397552       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:15.397613       1 main.go:299] handling current node
	I0916 11:42:25.398751       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:25.398783       1 main.go:299] handling current node
	I0916 11:42:35.395218       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:35.395262       1 main.go:299] handling current node
	I0916 11:42:45.397419       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:45.397464       1 main.go:299] handling current node
	I0916 11:42:55.402217       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:42:55.402249       1 main.go:299] handling current node
	I0916 11:43:05.394944       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:05.394981       1 main.go:299] handling current node
	I0916 11:43:15.397437       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:15.397487       1 main.go:299] handling current node
	I0916 11:43:25.397439       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:43:25.397514       1 main.go:299] handling current node
	
	
	==> kube-apiserver [31259a2842c01cdff0d14a0b4a44faffe22e68351bdb4933fd112991cd172c02] <==
	I0916 11:41:41.453400       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0916 11:41:41.453431       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 11:41:41.458485       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0916 11:41:41.461410       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:41:41.461427       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0916 11:41:41.806470       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:41:41.841007       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0916 11:41:41.917086       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:41:41.918224       1 controller.go:606] quota admission added evaluator for: endpoints
	I0916 11:41:41.921847       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:41:42.967364       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0916 11:41:43.351236       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0916 11:41:43.504028       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0916 11:41:48.768075       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:42:00.244433       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:42:00.297844       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0916 11:42:11.190173       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:42:11.190214       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:42:11.190222       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:42:42.093321       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:42:42.093393       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:42:42.093403       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:43:20.270631       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:43:20.270672       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:43:20.270679       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [9aff740155270f309e3bf9622b4b95a1442f7189e5f6d6afe62e5b01c7809eb7] <==
	I0916 11:42:00.302502       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0916 11:42:00.303655       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pcbvp"
	I0916 11:42:00.303679       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mjcgf"
	I0916 11:42:00.307508       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-406673" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0916 11:42:00.312688       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-q8x49"
	I0916 11:42:00.321152       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-6xlgw"
	I0916 11:42:00.393566       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0916 11:42:00.393684       1 shared_informer.go:247] Caches are synced for HPA 
	I0916 11:42:00.393856       1 shared_informer.go:247] Caches are synced for disruption 
	I0916 11:42:00.393875       1 disruption.go:339] Sending events to api server.
	E0916 11:42:00.408825       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"366e9dff-395f-41eb-aaa4-5fe8a77c24b1", ResourceVersion:"267", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862083703, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014c20c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014c20e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014c2100), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2120), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014c2160), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014c2180)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014c21c0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0010e7ce0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0005f4238), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000430fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00060a638)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0005f4280)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0916 11:42:00.423745       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:42:00.453287       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0916 11:42:00.469890       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:42:00.626879       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0916 11:42:00.895582       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:42:00.895685       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0916 11:42:00.927104       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:42:01.537394       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0916 11:42:01.602983       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-q8x49"
	I0916 11:42:30.295800       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0916 11:43:33.399665       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0916 11:43:33.424843       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0916 11:43:34.410616       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-zkwwm"
	
	
	==> kube-proxy [de3eaebd990dc4d88fe7b984f93bd7eb958c0dfe611a107b40ca0486581c209c] <==
	I0916 11:42:00.995500       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:42:00.995590       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:42:01.010731       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:42:01.010826       1 server_others.go:185] Using iptables Proxier.
	I0916 11:42:01.012001       1 server.go:650] Version: v1.20.0
	I0916 11:42:01.013499       1 config.go:315] Starting service config controller
	I0916 11:42:01.013577       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:42:01.013592       1 config.go:224] Starting endpoint slice config controller
	I0916 11:42:01.013614       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:42:01.113797       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:42:01.113806       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [6f6e59b67f114821943f7dd0399956bac2514037a059d1e22127afccbbda4621] <==
	W0916 11:41:40.476670       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:41:40.476699       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:41:40.476709       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:41:40.476720       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:41:40.516274       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:41:40.516365       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:41:40.516377       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:41:40.516397       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0916 11:41:40.517924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:40.524733       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:41:40.593689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:40.593833       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:41:40.594045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:41:40.594338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:41:40.594501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:40.594699       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:41:40.594858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:41:40.595116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:41:40.595261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:41:40.595399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:41:41.428933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:41.508045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.594591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.695406       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0916 11:41:44.916550       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495548    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/5888dd63-6767-4920-ac13-becf70cd6481-lib-modules") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495604    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-c5qt9" (UniqueName: "kubernetes.io/secret/5888dd63-6767-4920-ac13-becf70cd6481-kindnet-token-c5qt9") pod "kindnet-mjcgf" (UID: "5888dd63-6767-4920-ac13-becf70cd6481")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: I0916 11:42:00.495632    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1-xtables-lock") pod "kube-proxy-pcbvp" (UID: "d3eb8ccf-e8ed-452f-8029-b6c8a44f56c1")
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: W0916 11:42:00.633660    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-3d1945d7b04c2d25d7a1cc6d0bafc6adce69c9f092118e0e86af68ccc80d1014 WatchSource:0}: Error finding container 3d1945d7b04c2d25d7a1cc6d0bafc6adce69c9f092118e0e86af68ccc80d1014: Status 404 returned error &{%!s(*http.body=&{0xc0009ffd80 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:00 old-k8s-version-406673 kubelet[2069]: W0916 11:42:00.640993    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-8c9b9fc80cd428329dc256f5b234864e1037d0a44e37ad7d8aa19e4546d83c7a WatchSource:0}: Error finding container 8c9b9fc80cd428329dc256f5b234864e1037d0a44e37ad7d8aa19e4546d83c7a: Status 404 returned error &{%!s(*http.body=&{0xc000e4daa0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:03 old-k8s-version-406673 kubelet[2069]: E0916 11:42:03.893546    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:08 old-k8s-version-406673 kubelet[2069]: E0916 11:42:08.894227    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:13 old-k8s-version-406673 kubelet[2069]: E0916 11:42:13.894965    2069 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.532522    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.534500    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669791    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-767ft" (UniqueName: "kubernetes.io/secret/28d14db2-66e4-43f6-8288-4ddc0f3a994c-storage-provisioner-token-767ft") pod "storage-provisioner" (UID: "28d14db2-66e4-43f6-8288-4ddc0f3a994c")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669832    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/28d14db2-66e4-43f6-8288-4ddc0f3a994c-tmp") pod "storage-provisioner" (UID: "28d14db2-66e4-43f6-8288-4ddc0f3a994c")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669854    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/684992a2-7081-4df3-a73e-a21569a28ce6-config-volume") pod "coredns-74ff55c5b-6xlgw" (UID: "684992a2-7081-4df3-a73e-a21569a28ce6")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: I0916 11:42:33.669868    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-75kvx" (UniqueName: "kubernetes.io/secret/684992a2-7081-4df3-a73e-a21569a28ce6-coredns-token-75kvx") pod "coredns-74ff55c5b-6xlgw" (UID: "684992a2-7081-4df3-a73e-a21569a28ce6")
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: W0916 11:42:33.858343    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32 WatchSource:0}: Error finding container eee3fde4da3300d65961325c2da1b02fc2faeb05c1e3162ec7ab538dafae2f32: Status 404 returned error &{%!s(*http.body=&{0xc0001a8060 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:42:33 old-k8s-version-406673 kubelet[2069]: W0916 11:42:33.859070    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33 WatchSource:0}: Error finding container 15c36050232540e80f8a69f077b83fba51bf04e9293ac1eac93c264662957a33: Status 404 returned error &{%!s(*http.body=&{0xc0001b7f60 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: I0916 11:43:34.413722    2069 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: I0916 11:43:34.609591    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/cc94e6fe-629d-4146-adc6-f32166bf5081-tmp-dir") pod "metrics-server-9975d5f86-zkwwm" (UID: "cc94e6fe-629d-4146-adc6-f32166bf5081")
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: I0916 11:43:34.609726    2069 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-2vx2d" (UniqueName: "kubernetes.io/secret/cc94e6fe-629d-4146-adc6-f32166bf5081-metrics-server-token-2vx2d") pod "metrics-server-9975d5f86-zkwwm" (UID: "cc94e6fe-629d-4146-adc6-f32166bf5081")
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: W0916 11:43:34.741208    2069 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/crio-5c9fa70f6195136374cf170e97932229b3d12ad93ed8a98fdb07934cb7083502 WatchSource:0}: Error finding container 5c9fa70f6195136374cf170e97932229b3d12ad93ed8a98fdb07934cb7083502: Status 404 returned error &{%!s(*http.body=&{0xc0013d5c60 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7728e0) %!s(func() error=0x772860)}
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: E0916 11:43:34.817929    2069 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: E0916 11:43:34.817999    2069 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: E0916 11:43:34.818173    2069 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-2vx2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: E0916 11:43:34.818218    2069 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 16 11:43:34 old-k8s-version-406673 kubelet[2069]: E0916 11:43:34.945889    2069 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
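
The ErrImagePull/ImagePullBackOff pair above is the expected outcome here rather than a cluster fault: this profile deliberately points the metrics-server addon at an unresolvable registry (CustomAddonRegistries:map[MetricsServer:fake.domain] in the cluster config printed later in this report, and "Using image fake.domain/registry.k8s.io/echoserver:1.4" in the start output), so the pull fails at DNS lookup and the kubelet backs off. A minimal client-go sketch, assuming a reachable kubeconfig path (the "/path/to/kubeconfig" below is a placeholder), that surfaces these same waiting reasons from pod status:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; this run's contexts live under the jenkins
		// minikube-integration kubeconfig shown elsewhere in the report.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			for _, cs := range pod.Status.ContainerStatuses {
				// ErrImagePull / ImagePullBackOff appear as container waiting reasons.
				if w := cs.State.Waiting; w != nil {
					fmt.Printf("%s/%s: %s (%s)\n", pod.Name, cs.Name, w.Reason, w.Message)
				}
			}
		}
	}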
	
	
	==> storage-provisioner [33a7974b5f09f6adda6bc4521f20647b17395f9a91d88f7ef8146e1df96bf21d] <==
	I0916 11:42:33.942881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:42:33.952289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:42:33.952327       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:42:33.995195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:42:33.995263       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88c65391-c353-4f97-bac8-9bd49b9f0588", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77 became leader
	I0916 11:42:33.995326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	I0916 11:42:34.095721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	

                                                
                                                
-- /stdout --
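
The storage-provisioner log in the dump above is standard client-go leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-based lock in this old provisioner, per the Kind:"Endpoints" event), emits a LeaderElection event, and only then starts its controller. A minimal sketch of the same pattern, assuming in-cluster credentials and substituting the newer Lease-based lock; names and identity are illustrative, not the provisioner's actual source:

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the sketch runs inside the cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; stopping")
				},
			},
		})
	}
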
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (468.556µs)
helpers_test.go:263: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.71s)
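
Note the failure mode on the kubectl steps above: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the binary at all, which points at a wrong-architecture or truncated kubectl on the agent rather than at anything cluster-side. A small Go sketch, using only the path from the log, that checks the binary's ELF machine type against the host:

	package main

	import (
		"debug/elf"
		"fmt"
		"runtime"
	)

	func main() {
		// A file that is not valid ELF (truncated download, an HTML error page
		// saved as the binary) fails here and also yields "exec format error".
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			fmt.Println("not a readable ELF binary:", err)
			return
		}
		defer f.Close()
		// On this amd64 host the machine type should be EM_X86_64; any other
		// value reproduces the kernel's "exec format error".
		fmt.Printf("binary machine: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
	}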

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (377.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-406673 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0916 11:44:04.833045   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:04.839416   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:04.850826   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:04.872231   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:04.913719   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:04.995168   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:05.156694   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:05.478335   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:06.119971   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:07.401497   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:09.963788   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:15.085643   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:19.365029   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:25.327882   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:45.809514   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:02.444708   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:26.771593   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:34.294094   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:34.300466   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:34.311800   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:34.333159   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:34.374516   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:34.456729   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:34.618255   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:34.940549   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:35.582158   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:36.864663   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:39.426376   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:41.286844   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:44.548384   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:54.648551   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:54.654931   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:54.666315   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:54.688055   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:54.729477   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:54.790055   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:54.811446   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:54.973358   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:55.295017   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:55.937042   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:57.219050   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:45:59.780324   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:46:04.901637   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:46:06.689572   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:46:15.142909   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:46:15.271366   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:46:25.515676   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:46:35.625030   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:46:48.693458   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:46:56.233312   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:47:16.586277   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:47:57.428002   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:48:18.154769   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:48:25.129002   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:48:38.508231   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:49:04.833188   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:49:32.534794   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-406673 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 102 (6m15.100044303s)

                                                
                                                
-- stdout --
	* [old-k8s-version-406673] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-406673" primary control-plane node in "old-k8s-version-406673" cluster
	* Pulling base image v0.0.45-1726358845-19644 ...
	* Restarting existing docker container for "old-k8s-version-406673" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-406673 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 11:43:41.448675  342599 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:43:41.449069  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:43:41.449083  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:43:41.449090  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:43:41.449520  342599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:43:41.450534  342599 out.go:352] Setting JSON to false
	I0916 11:43:41.451659  342599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5161,"bootTime":1726481860,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:43:41.451763  342599 start.go:139] virtualization: kvm guest
	I0916 11:43:41.454105  342599 out.go:177] * [old-k8s-version-406673] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:43:41.455638  342599 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:43:41.455671  342599 notify.go:220] Checking for updates...
	I0916 11:43:41.458330  342599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:43:41.459636  342599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:41.460924  342599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:43:41.462503  342599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:43:41.464018  342599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:43:41.466022  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:41.468148  342599 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 11:43:41.469509  342599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:43:41.493994  342599 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:43:41.494082  342599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:43:41.552267  342599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:43:41.542033993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:43:41.552366  342599 docker.go:318] overlay module found
	I0916 11:43:41.554456  342599 out.go:177] * Using the docker driver based on existing profile
	I0916 11:43:41.555523  342599 start.go:297] selected driver: docker
	I0916 11:43:41.555540  342599 start.go:901] validating driver "docker" against &{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:41.555622  342599 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:43:41.556394  342599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:43:41.611358  342599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:43:41.600217835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:43:41.611712  342599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:43:41.611741  342599 cni.go:84] Creating CNI manager for ""
	I0916 11:43:41.611767  342599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:43:41.611800  342599 start.go:340] cluster config:
	{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:41.614659  342599 out.go:177] * Starting "old-k8s-version-406673" primary control-plane node in "old-k8s-version-406673" cluster
	I0916 11:43:41.616047  342599 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:43:41.617540  342599 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:43:41.619066  342599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:43:41.619093  342599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:43:41.619118  342599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 11:43:41.619138  342599 cache.go:56] Caching tarball of preloaded images
	I0916 11:43:41.619235  342599 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:43:41.619248  342599 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 11:43:41.619349  342599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	W0916 11:43:41.640867  342599 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:43:41.640901  342599 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:43:41.641001  342599 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:43:41.641018  342599 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:43:41.641022  342599 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:43:41.641030  342599 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:43:41.641034  342599 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:43:41.718830  342599 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:43:41.718879  342599 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:43:41.718924  342599 start.go:360] acquireMachinesLock for old-k8s-version-406673: {Name:mk8e16c995170a3c051ae96503b85729d385d06f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:43:41.719008  342599 start.go:364] duration metric: took 59.119µs to acquireMachinesLock for "old-k8s-version-406673"
	I0916 11:43:41.719031  342599 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:43:41.719049  342599 fix.go:54] fixHost starting: 
	I0916 11:43:41.719280  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:41.737386  342599 fix.go:112] recreateIfNeeded on old-k8s-version-406673: state=Stopped err=<nil>
	W0916 11:43:41.737478  342599 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:43:41.739550  342599 out.go:177] * Restarting existing docker container for "old-k8s-version-406673" ...
	I0916 11:43:41.740931  342599 cli_runner.go:164] Run: docker start old-k8s-version-406673
	I0916 11:43:42.037870  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:42.057638  342599 kic.go:430] container "old-k8s-version-406673" state is running.
	I0916 11:43:42.058125  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:42.077127  342599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:43:42.077438  342599 machine.go:93] provisionDockerMachine start ...
	I0916 11:43:42.077513  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:42.096731  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:42.096978  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:42.096997  342599 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:43:42.097660  342599 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48048->127.0.0.1:33093: read: connection reset by peer
	I0916 11:43:45.232865  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:43:45.232896  342599 ubuntu.go:169] provisioning hostname "old-k8s-version-406673"
	I0916 11:43:45.232959  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.254903  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.255229  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.255258  342599 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-406673 && echo "old-k8s-version-406673" | sudo tee /etc/hostname
	I0916 11:43:45.401461  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:43:45.401545  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.419533  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.419740  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.419760  342599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-406673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-406673/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-406673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:43:45.557487  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:43:45.557514  342599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:43:45.557560  342599 ubuntu.go:177] setting up certificates
	I0916 11:43:45.557573  342599 provision.go:84] configureAuth start
	I0916 11:43:45.557627  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:45.574760  342599 provision.go:143] copyHostCerts
	I0916 11:43:45.574844  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:43:45.574860  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:43:45.574945  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:43:45.575091  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:43:45.575105  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:43:45.575153  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:43:45.575244  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:43:45.575255  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:43:45.575295  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:43:45.575376  342599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-406673 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-406673]
	I0916 11:43:45.748283  342599 provision.go:177] copyRemoteCerts
	I0916 11:43:45.748356  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:43:45.748393  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.765636  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:45.862269  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:43:45.885003  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 11:43:45.907169  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:43:45.931358  342599 provision.go:87] duration metric: took 373.76893ms to configureAuth
	I0916 11:43:45.931402  342599 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:43:45.931619  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:45.931737  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.950090  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.950326  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.950350  342599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:43:46.250285  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:43:46.250314  342599 machine.go:96] duration metric: took 4.172856931s to provisionDockerMachine
	I0916 11:43:46.250329  342599 start.go:293] postStartSetup for "old-k8s-version-406673" (driver="docker")
	I0916 11:43:46.250342  342599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:43:46.250412  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:43:46.250460  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.269457  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.370592  342599 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:43:46.373854  342599 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:43:46.373887  342599 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:43:46.373895  342599 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:43:46.373901  342599 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:43:46.373912  342599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:43:46.373966  342599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:43:46.374049  342599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:43:46.374134  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:43:46.382190  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:43:46.404854  342599 start.go:296] duration metric: took 154.508203ms for postStartSetup
	I0916 11:43:46.404944  342599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:43:46.404984  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.423369  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.518250  342599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:43:46.522658  342599 fix.go:56] duration metric: took 4.803604453s for fixHost
	I0916 11:43:46.522684  342599 start.go:83] releasing machines lock for "old-k8s-version-406673", held for 4.803664456s
	I0916 11:43:46.522755  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:46.540413  342599 ssh_runner.go:195] Run: cat /version.json
	I0916 11:43:46.540463  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.540483  342599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:43:46.540550  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.559326  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.559343  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.649310  342599 ssh_runner.go:195] Run: systemctl --version
	I0916 11:43:46.731311  342599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:43:46.869148  342599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:43:46.873764  342599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:43:46.882554  342599 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:43:46.882626  342599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:43:46.891468  342599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:43:46.891491  342599 start.go:495] detecting cgroup driver to use...
	I0916 11:43:46.891523  342599 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:43:46.891589  342599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:43:46.903563  342599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:43:46.914685  342599 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:43:46.914743  342599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:43:46.927471  342599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:43:46.938829  342599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:43:47.019225  342599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:43:47.095917  342599 docker.go:233] disabling docker service ...
	I0916 11:43:47.095984  342599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:43:47.108451  342599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:43:47.119842  342599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:43:47.196356  342599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:43:47.275282  342599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:43:47.286402  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:43:47.301909  342599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 11:43:47.301978  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.311648  342599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:43:47.311699  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.321003  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.330113  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.339110  342599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:43:47.348230  342599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:43:47.356509  342599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:43:47.364678  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:47.441764  342599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:43:47.538547  342599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:43:47.538607  342599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:43:47.542039  342599 start.go:563] Will wait 60s for crictl version
	I0916 11:43:47.542091  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:47.545302  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:43:47.578706  342599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:43:47.578785  342599 ssh_runner.go:195] Run: crio --version
	I0916 11:43:47.613962  342599 ssh_runner.go:195] Run: crio --version
	I0916 11:43:47.653182  342599 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0916 11:43:47.654482  342599 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:43:47.672357  342599 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:43:47.676229  342599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:43:47.687076  342599 kubeadm.go:883] updating cluster {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:43:47.687218  342599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:43:47.687280  342599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:43:47.727184  342599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:43:47.727258  342599 ssh_runner.go:195] Run: which lz4
	I0916 11:43:47.730999  342599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:43:47.734265  342599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:43:47.734295  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 11:43:48.663263  342599 crio.go:462] duration metric: took 932.291429ms to copy over tarball
	I0916 11:43:48.663330  342599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:43:51.176610  342599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513253657s)
	I0916 11:43:51.176636  342599 crio.go:469] duration metric: took 2.513345828s to extract the tarball
	I0916 11:43:51.176643  342599 ssh_runner.go:146] rm: /preloaded.tar.lz4
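Because the node had no /preloaded.tar.lz4, the ~473 MB preload is scp'd over and unpacked into /var so CRI-O starts with the Kubernetes images already in its store, after which the tarball is removed. A sketch of the extraction step, reusing the exact tar flags from the log:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // extractPreload replays the log's tar invocation: unpack the
    // lz4-compressed preload into /var with extended attributes
    // preserved, then delete the tarball, as the rm step above does.
    func extractPreload(tarball string) error {
        tar := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
        if err := tar.Run(); err != nil {
            return err
        }
        return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            log.Fatal(err)
        }
    }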
	I0916 11:43:51.248591  342599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:43:51.284423  342599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:43:51.284455  342599 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:43:51.284517  342599 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:51.284558  342599 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.284565  342599 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.284571  342599 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.284544  342599 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.284593  342599 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:43:51.284623  342599 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.284686  342599 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.285864  342599 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.285942  342599 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.285948  342599 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.285942  342599 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.285946  342599 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.286009  342599 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:43:51.286019  342599 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.286049  342599 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:51.492242  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.522713  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 11:43:51.534975  342599 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:43:51.535071  342599 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.535150  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.544750  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.545678  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.559215  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.568350  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.570259  342599 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:43:51.570308  342599 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:43:51.570346  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.570365  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.573562  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.622238  342599 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:43:51.622290  342599 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.622339  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.623682  342599 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:43:51.623772  342599 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.623841  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.757921  342599 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:43:51.757942  342599 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:43:51.757968  342599 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.757968  342599 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.758009  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758009  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758101  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.758165  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:51.758219  342599 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:43:51.758251  342599 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.758269  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.758285  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758367  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.819059  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:51.819128  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.819135  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.819062  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.819186  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.819225  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.819239  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:52.005990  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:52.007566  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:43:52.012996  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:52.013008  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:52.013082  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:52.013133  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:52.013213  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:52.113680  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:52.201435  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:52.201538  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:43:52.206771  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:43:52.208106  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:43:52.208187  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:52.225734  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:43:52.299412  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:43:52.299468  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:43:52.378199  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:52.517048  342599 cache_images.go:92] duration metric: took 1.232574481s to LoadCachedImages
	W0916 11:43:52.517148  342599 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
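Each cached image is verified by asking the runtime for its stored ID (podman image inspect --format {{.Id}}) and comparing it against the expected hash; misses and mismatches are flagged "needs transfer", removed via crictl rmi, and reloaded from the local cache directory. A sketch of that check, using the pause image and hash straight from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether image must be (re)loaded: it reads the
    // ID the runtime has stored and compares it with the expected hash,
    // the same test behind the "does not exist at hash ..." lines above.
    func needsTransfer(image, wantHash string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present at all, so it must be transferred
        }
        return strings.TrimSpace(string(out)) != wantHash
    }

    func main() {
        fmt.Println("needs transfer:", needsTransfer("registry.k8s.io/pause:3.2",
            "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
    }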
	I0916 11:43:52.517167  342599 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 crio true true} ...
	I0916 11:43:52.517302  342599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-406673 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:43:52.517418  342599 ssh_runner.go:195] Run: crio config
	I0916 11:43:52.561512  342599 cni.go:84] Creating CNI manager for ""
	I0916 11:43:52.561534  342599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:43:52.561543  342599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:43:52.561561  342599 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-406673 NodeName:old-k8s-version-406673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:43:52.561689  342599 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-406673"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:43:52.561758  342599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:43:52.570704  342599 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:43:52.570772  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:43:52.579313  342599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (481 bytes)
	I0916 11:43:52.596268  342599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:43:52.612866  342599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
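The kubelet drop-in and the kubeadm.yaml above are rendered from templates and scp'd to the node as plain bytes. A toy illustration of template-based rendering in Go; the template text and field names below are invented for illustration and are not minikube's actual bootstrapper template:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // A cut-down kubeadm manifest rendered from Go data, the same idea
    // used to produce /var/tmp/minikube/kubeadm.yaml.new above.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        err := t.Execute(os.Stdout, map[string]any{
            "NodeIP":    "192.168.103.2",
            "Port":      8443,
            "CRISocket": "/var/run/crio/crio.sock",
            "NodeName":  "old-k8s-version-406673",
        })
        if err != nil {
            log.Fatal(err)
        }
    }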
	I0916 11:43:52.629581  342599 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:43:52.632853  342599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:43:52.643379  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:52.720660  342599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:43:52.734195  342599 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673 for IP: 192.168.103.2
	I0916 11:43:52.734216  342599 certs.go:194] generating shared ca certs ...
	I0916 11:43:52.734231  342599 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:52.734355  342599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:43:52.734391  342599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:43:52.734402  342599 certs.go:256] generating profile certs ...
	I0916 11:43:52.734473  342599 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key
	I0916 11:43:52.734530  342599 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db
	I0916 11:43:52.734564  342599 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key
	I0916 11:43:52.734710  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:43:52.734744  342599 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:43:52.734754  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:43:52.734773  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:43:52.734795  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:43:52.734814  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:43:52.734850  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:43:52.735413  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:43:52.758887  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:43:52.782936  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:43:52.810335  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:43:52.835181  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:43:52.858252  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:43:52.880337  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:43:52.903907  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:43:52.927676  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:43:52.950944  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:43:52.974697  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:43:52.997934  342599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:43:53.016161  342599 ssh_runner.go:195] Run: openssl version
	I0916 11:43:53.021716  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:43:53.032092  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.035726  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.035794  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.042425  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:43:53.050857  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:43:53.059886  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.063252  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.063300  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.069514  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:43:53.078142  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:43:53.087290  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.090824  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.090896  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.097688  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
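The test -L || ln -fs sequences install each CA under /etc/ssl/certs/<subject-hash>.0, the lookup name OpenSSL-based tools use to find trusted certificates; the hash values come from the openssl x509 -hash -noout runs just above. A sketch that shells out for the hash the same way and creates the link (writing to /etc/ssl/certs requires elevated rights):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert symlinks /etc/ssl/certs/<subject-hash>.0 to the cert so
    // OpenSSL's hashed-directory lookup can find it, mirroring ln -fs.
    func linkCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // ignore error: the link may not exist yet
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }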
	I0916 11:43:53.106525  342599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:43:53.109881  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:43:53.116612  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:43:53.123543  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:43:53.130272  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:43:53.136649  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:43:53.143689  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
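Each -checkend 86400 invocation asks whether a certificate will still be valid 24 hours from now; only when every check passes does minikube keep the existing certificates. The equivalent test in Go's crypto/x509, using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the PEM certificate at path is still valid
    // d from now, the same test as `openssl x509 -noout -checkend`.
    func checkend(path string, d time.Duration) (bool, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(b)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM data", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.After(time.Now().Add(d)), nil
    }

    func main() {
        ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("valid in 24h:", ok, err)
    }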
	I0916 11:43:53.151260  342599 kubeadm.go:392] StartCluster: {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:53.151380  342599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:43:53.151472  342599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:43:53.185768  342599 cri.go:89] found id: ""
	I0916 11:43:53.185846  342599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:43:53.194666  342599 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:43:53.194693  342599 kubeadm.go:593] restartPrimaryControlPlane start ...
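The sudo ls probe above is the restart heuristic: when the kubelet flags file, the kubelet config, and the etcd data directory all exist, a previous cluster is assumed and minikube attempts a restart rather than a fresh kubeadm init. A sketch of the same check (the helper name is illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    // hasExistingCluster mirrors the `sudo ls ...` probe: a prior cluster
    // is assumed only if all three artifacts are present on disk.
    func hasExistingCluster() bool {
        for _, p := range []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        } {
            if _, err := os.Stat(p); err != nil {
                return false
            }
        }
        return true
    }

    func main() {
        fmt.Println("attempt cluster restart:", hasExistingCluster())
    }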
	I0916 11:43:53.194743  342599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:43:53.203055  342599 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:43:53.203881  342599 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-406673" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:53.204510  342599 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-406673" cluster setting kubeconfig missing "old-k8s-version-406673" context setting]
	I0916 11:43:53.205412  342599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:53.206930  342599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:43:53.215880  342599 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0916 11:43:53.215923  342599 kubeadm.go:597] duration metric: took 21.223045ms to restartPrimaryControlPlane
	I0916 11:43:53.215932  342599 kubeadm.go:394] duration metric: took 64.683125ms to StartCluster
	I0916 11:43:53.215949  342599 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:53.216018  342599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:53.218206  342599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
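The repair step rewrites the kubeconfig under a file lock, adding the cluster and context entries reported missing above. A sketch of that update using client-go's clientcmd package, assuming k8s.io/client-go is available as a dependency; the server URL is inferred from the node IP and port seen earlier, and the lock handling is elided:

    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig inserts the cluster and context entries that the
    // "kubeconfig needs updating (will repair)" line refers to.
    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cfg.Clusters[name] = &api.Cluster{Server: server}
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        err := repairKubeconfig(
            "/home/jenkins/minikube-integration/19651-3799/kubeconfig",
            "old-k8s-version-406673",
            "https://192.168.103.2:8443", // inferred from the node IP/port above
        )
        if err != nil {
            log.Fatal(err)
        }
    }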
	I0916 11:43:53.218661  342599 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:43:53.219512  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:53.219410  342599 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:43:53.219686  342599 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-406673"
	I0916 11:43:53.219705  342599 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-406673"
	W0916 11:43:53.219717  342599 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:43:53.219747  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.219785  342599 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-406673"
	I0916 11:43:53.219883  342599 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-406673"
	I0916 11:43:53.219823  342599 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-406673"
	I0916 11:43:53.220280  342599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-406673"
	I0916 11:43:53.219834  342599 addons.go:69] Setting dashboard=true in profile "old-k8s-version-406673"
	I0916 11:43:53.220375  342599 addons.go:234] Setting addon dashboard=true in "old-k8s-version-406673"
	W0916 11:43:53.220386  342599 addons.go:243] addon dashboard should already be in state true
	I0916 11:43:53.220422  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	W0916 11:43:53.220260  342599 addons.go:243] addon metrics-server should already be in state true
	I0916 11:43:53.220488  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.220653  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220710  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220869  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220926  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.221032  342599 out.go:177] * Verifying Kubernetes components...
	I0916 11:43:53.222752  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:53.244346  342599 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-406673"
	W0916 11:43:53.244373  342599 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:43:53.244398  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.244751  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.245037  342599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:43:53.246474  342599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:53.246481  342599 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:43:53.248096  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:43:53.248127  342599 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:43:53.248185  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.248192  342599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.248201  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:43:53.248098  342599 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:43:53.248252  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.250338  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:43:53.250359  342599 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:43:53.250404  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.273873  342599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:53.273898  342599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:43:53.273955  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.274169  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.275302  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.280036  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.301411  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.328656  342599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:43:53.340523  342599 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:43:53.387478  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:43:53.387506  342599 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:43:53.387745  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.396664  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:43:53.396691  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:43:53.406440  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:43:53.406463  342599 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:43:53.407903  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:53.416422  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:43:53.416449  342599 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:43:53.427712  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:43:53.427740  342599 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:43:53.439315  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:53.439342  342599 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:43:53.503707  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:43:53.503732  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 11:43:53.510579  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:53.525664  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:43:53.525696  342599 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0916 11:43:53.525914  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.525944  342599 retry.go:31] will retry after 152.87848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
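Every apply fails the same way while the apiserver is still coming up (connection refused on localhost:8443), so each addon manifest is retried with a short, growing, jittered delay, which is what the repeated "will retry after NNNms" lines below record. A generic sketch of that pattern, not minikube's retry.go itself:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts run out, sleeping a
    // jittered, growing delay between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = retry(5, 150*time.Millisecond, func() error {
            return fmt.Errorf("connection to the server localhost:8443 was refused")
        })
    }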
	W0916 11:43:53.532836  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.532872  342599 retry.go:31] will retry after 157.07542ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.601969  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:43:53.601994  342599 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:43:53.621346  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:43:53.621373  342599 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0916 11:43:53.634937  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.634974  342599 retry.go:31] will retry after 321.390454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.639540  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:43:53.639567  342599 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:43:53.656867  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:53.656893  342599 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:43:53.673744  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:53.679888  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.691095  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:53.745183  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.745217  342599 retry.go:31] will retry after 136.130565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:53.796348  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.796382  342599 retry.go:31] will retry after 443.518837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:53.810771  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.810811  342599 retry.go:31] will retry after 382.546252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.881722  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:53.941956  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.941994  342599 retry.go:31] will retry after 236.364167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.957151  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:43:54.015814  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.015853  342599 retry.go:31] will retry after 375.113173ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.179194  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:54.193519  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:54.240911  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:54.252866  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.252918  342599 retry.go:31] will retry after 401.151273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.296437  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.296479  342599 retry.go:31] will retry after 764.07049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.333432  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.333478  342599 retry.go:31] will retry after 477.82927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.392081  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:43:54.451932  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.451973  342599 retry.go:31] will retry after 337.169739ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.654238  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:54.712692  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.712728  342599 retry.go:31] will retry after 935.95517ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.789893  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:54.812303  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:54.852000  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.852037  342599 retry.go:31] will retry after 1.132792971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.874248  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.874282  342599 retry.go:31] will retry after 1.153231222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.061616  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:55.118580  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.118616  342599 retry.go:31] will retry after 952.42092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.341220  342599 node_ready.go:53] error getting node "old-k8s-version-406673": Get "https://192.168.103.2:8443/api/v1/nodes/old-k8s-version-406673": dial tcp 192.168.103.2:8443: connect: connection refused
	I0916 11:43:55.649816  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:55.707503  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.707546  342599 retry.go:31] will retry after 1.525466419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.985469  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:56.027729  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:56.048118  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.048158  342599 retry.go:31] will retry after 1.537917974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.071232  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:56.087643  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.087676  342599 retry.go:31] will retry after 1.497738328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:56.130041  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.130083  342599 retry.go:31] will retry after 1.703517602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.233406  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:57.294430  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.294464  342599 retry.go:31] will retry after 1.40258396s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.342100  342599 node_ready.go:53] error getting node "old-k8s-version-406673": Get "https://192.168.103.2:8443/api/v1/nodes/old-k8s-version-406673": dial tcp 192.168.103.2:8443: connect: connection refused
	I0916 11:43:57.586456  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:57.586462  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:57.646094  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.646123  342599 retry.go:31] will retry after 1.833576806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:57.646162  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.646188  342599 retry.go:31] will retry after 2.656765994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.834560  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:57.892906  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.892939  342599 retry.go:31] will retry after 2.18125411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:58.698022  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:58.758259  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:58.758297  342599 retry.go:31] will retry after 1.653760659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:59.480055  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:44:00.074833  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:44:00.303327  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:44:00.413145  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:44:03.516026  342599 node_ready.go:49] node "old-k8s-version-406673" has status "Ready":"True"
	I0916 11:44:03.516063  342599 node_ready.go:38] duration metric: took 10.17550256s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:44:03.516076  342599 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:44:03.717989  342599 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.816595  342599 pod_ready.go:93] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:03.816691  342599 pod_ready.go:82] duration metric: took 98.666189ms for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.816719  342599 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.918233  342599 pod_ready.go:93] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:03.918276  342599 pod_ready.go:82] duration metric: took 101.538159ms for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.918295  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:04.620945  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.140838756s)
	I0916 11:44:04.621040  342599 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-406673"
	I0916 11:44:04.621047  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.317689547s)
	I0916 11:44:04.620999  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.546120779s)
	I0916 11:44:04.898187  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.484990296s)
	I0916 11:44:04.900305  342599 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-406673 addons enable metrics-server
	
	I0916 11:44:04.901863  342599 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0916 11:44:04.903406  342599 addons.go:510] duration metric: took 11.683989587s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
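
Note: the retry.go lines above show each failed `kubectl apply` being re-run on a growing, jittered delay until the restarted apiserver accepts connections on localhost:8443. Below is a minimal sketch of that retry-with-backoff pattern; it is not minikube's actual implementation, and runApply is a hypothetical stand-in for the real ssh_runner kubectl invocation.

    // A minimal retry-with-backoff sketch (not minikube's real retry.go).
    // runApply is a hypothetical stand-in for the ssh_runner kubectl call.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func runApply(manifest string) error {
        // Placeholder failure: the apiserver is still coming back up.
        return errors.New("connection to the server localhost:8443 was refused")
    }

    func applyWithRetry(manifest string, attempts int) error {
        backoff := 300 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = runApply(manifest); err == nil {
                return nil
            }
            // Jittered, growing delay, like the 375ms -> 401ms -> 764ms steps above.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2
        }
        return err
    }

    func main() {
        _ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5)
    }
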
	I0916 11:44:05.923500  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:07.924626  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:09.926452  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:12.424233  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:14.924223  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:15.423797  342599 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:15.423828  342599 pod_ready.go:82] duration metric: took 11.505525488s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:15.423838  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:17.429733  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:19.430224  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:21.929713  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:24.009627  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:26.430326  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:28.430726  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:30.930780  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:33.433263  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:35.929752  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:37.929837  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:39.930279  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:41.930540  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:43.930791  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:46.429510  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:48.430161  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:50.430295  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:52.929990  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:54.930580  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:57.429547  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:59.430191  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:01.930680  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:04.430050  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:06.431610  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:08.929699  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:11.430903  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:13.929747  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:15.931063  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:18.430474  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:20.929144  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:22.929833  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:24.930407  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:26.931153  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:29.430186  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:31.929901  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:34.430487  342599 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.430512  342599 pod_ready.go:82] duration metric: took 1m19.006667807s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.430523  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.435258  342599 pod_ready.go:93] pod "kube-proxy-pcbvp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.435281  342599 pod_ready.go:82] duration metric: took 4.751917ms for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.435290  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.439468  342599 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.439490  342599 pod_ready.go:82] duration metric: took 4.192562ms for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
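
Note: each pod_ready check above reduces to reading the pod's Ready condition from its status. The sketch below shows that check with the Kubernetes client-go types; the helper name isPodReady and the inline Pod literal are illustrative, not minikube's code.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's status carries Ready=True,
    // i.e. what the `has status "Ready":"True"` lines above reflect.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }}}
        fmt.Println(isPodReady(pod)) // false, like the metrics-server pod below
    }
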
	I0916 11:45:34.439505  342599 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:36.445827  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:38.946013  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:41.444852  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:43.445737  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:45.946748  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:48.445118  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:50.445210  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:52.445816  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:54.446068  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:56.945501  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:58.945685  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:01.445377  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:03.445752  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:05.945806  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:08.446010  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:10.446073  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:12.945844  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:15.446131  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:17.946289  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:20.445864  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:22.445951  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:24.946488  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:27.445839  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:29.945436  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:31.945951  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:33.947646  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:36.445905  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:38.948094  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:41.446271  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:43.978003  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:46.445688  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:48.946292  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:51.445713  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:53.945072  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:55.945739  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:58.445191  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:00.445680  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:02.446254  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:04.946036  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:07.447667  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:09.945983  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:12.445228  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:14.445689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:16.445931  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:18.945281  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:20.945433  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:22.946291  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:25.444655  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:27.445696  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:29.445774  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:31.945999  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:34.444676  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:36.445444  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:38.945689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:41.446060  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:43.948656  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:46.445159  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:48.946051  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:51.446010  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:53.446145  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:55.945438  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:57.945706  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:59.946103  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:02.445233  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:04.945988  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:07.445200  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:09.446085  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:11.944825  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:13.945689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:16.444784  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:18.444860  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:20.445186  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:22.447125  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:24.945528  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:26.945691  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:29.446345  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:31.945589  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:34.444967  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:36.445485  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:38.945937  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:41.445492  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:43.445794  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:45.945563  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:48.445313  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:50.946012  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:53.445570  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:55.947554  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:58.445126  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:00.945813  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:02.946300  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:05.445265  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:07.446242  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:09.946173  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:12.446147  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:14.945283  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:17.447088  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:19.945240  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:21.945474  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:24.445814  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:26.945457  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:29.445643  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:31.945681  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:34.445158  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:34.445185  342599 pod_ready.go:82] duration metric: took 4m0.005672608s for pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace to be "Ready" ...
	E0916 11:49:34.445196  342599 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:49:34.445205  342599 pod_ready.go:39] duration metric: took 5m30.929118215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
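
Note: the `context deadline exceeded` above is the normal outcome of a bounded wait: the condition is re-checked on an interval until it either holds or the deadline passes, and here the metrics-server pod never went Ready before the wait was cut off. A minimal sketch of deadline-bounded polling follows; checkReady is hypothetical and stands in for the pod status check.

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // checkReady is a hypothetical condition; here it never succeeds,
    // like the metrics-server pod above.
    func checkReady() bool { return false }

    func waitReady(timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        ticker := time.NewTicker(2 * time.Second)
        defer ticker.Stop()
        for {
            if checkReady() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // "context deadline exceeded", as logged above
            case <-ticker.C:
            }
        }
    }

    func main() {
        fmt.Println(waitReady(4 * time.Second))
    }
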
	I0916 11:49:34.445222  342599 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:49:34.445252  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:49:34.445299  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:49:34.479712  342599 cri.go:89] found id: "f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:34.479738  342599 cri.go:89] found id: ""
	I0916 11:49:34.479748  342599 logs.go:276] 1 containers: [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d]
	I0916 11:49:34.479800  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.483247  342599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:49:34.483318  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:49:34.517155  342599 cri.go:89] found id: "7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:34.517180  342599 cri.go:89] found id: ""
	I0916 11:49:34.517188  342599 logs.go:276] 1 containers: [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6]
	I0916 11:49:34.517247  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.520774  342599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:49:34.520856  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:49:34.554354  342599 cri.go:89] found id: "97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:34.554377  342599 cri.go:89] found id: ""
	I0916 11:49:34.554387  342599 logs.go:276] 1 containers: [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84]
	I0916 11:49:34.554452  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.557960  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:49:34.558017  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:49:34.594211  342599 cri.go:89] found id: "0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:34.594233  342599 cri.go:89] found id: ""
	I0916 11:49:34.594241  342599 logs.go:276] 1 containers: [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f]
	I0916 11:49:34.594291  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.597717  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:49:34.597782  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:49:34.631348  342599 cri.go:89] found id: "5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:34.631372  342599 cri.go:89] found id: ""
	I0916 11:49:34.631382  342599 logs.go:276] 1 containers: [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849]
	I0916 11:49:34.631438  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.634962  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:49:34.635076  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:49:34.668370  342599 cri.go:89] found id: "b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:34.668392  342599 cri.go:89] found id: ""
	I0916 11:49:34.668401  342599 logs.go:276] 1 containers: [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19]
	I0916 11:49:34.668456  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.671903  342599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:49:34.671964  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:49:34.707573  342599 cri.go:89] found id: "368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:34.707601  342599 cri.go:89] found id: ""
	I0916 11:49:34.707611  342599 logs.go:276] 1 containers: [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4]
	I0916 11:49:34.707658  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.711089  342599 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:49:34.711146  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:49:34.746008  342599 cri.go:89] found id: "97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:34.746034  342599 cri.go:89] found id: ""
	I0916 11:49:34.746041  342599 logs.go:276] 1 containers: [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf]
	I0916 11:49:34.746091  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.749832  342599 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:49:34.749936  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:49:34.782428  342599 cri.go:89] found id: "5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:34.782453  342599 cri.go:89] found id: ""
	I0916 11:49:34.782462  342599 logs.go:276] 1 containers: [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd]
	I0916 11:49:34.782512  342599 ssh_runner.go:195] Run: which crictl
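
Note: the block above locates one container per component: `crictl ps -a --quiet --name=<component>` prints matching container IDs one per line, and the trailing empty `found id: ""` entries come from splitting the output on its final newline. A sketch of issuing that query and collecting the IDs; it assumes crictl is installed and sudo is available, and is not minikube's actual cri.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the container IDs that crictl reports for a
    // given name filter, mirroring the `crictl ps -a --quiet --name=...`
    // calls in the log above.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listContainers("kube-apiserver")
        fmt.Println(ids, err)
    }
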
	I0916 11:49:34.786501  342599 logs.go:123] Gathering logs for dmesg ...
	I0916 11:49:34.786532  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:49:34.807221  342599 logs.go:123] Gathering logs for kube-proxy [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849] ...
	I0916 11:49:34.807251  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:34.843519  342599 logs.go:123] Gathering logs for kube-apiserver [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d] ...
	I0916 11:49:34.843550  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:34.904038  342599 logs.go:123] Gathering logs for kubernetes-dashboard [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf] ...
	I0916 11:49:34.904072  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:34.938520  342599 logs.go:123] Gathering logs for kindnet [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4] ...
	I0916 11:49:34.938549  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:34.976046  342599 logs.go:123] Gathering logs for storage-provisioner [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd] ...
	I0916 11:49:34.976077  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:35.011710  342599 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:49:35.011741  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:49:35.076295  342599 logs.go:123] Gathering logs for kubelet ...
	I0916 11:49:35.076330  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:49:35.115636  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.608970    1237 reflector.go:138] object-"kube-system"/"storage-provisioner-token-767ft": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-767ft" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.115819  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609271    1237 reflector.go:138] object-"kube-system"/"coredns-token-75kvx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-75kvx" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116040  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609320    1237 reflector.go:138] object-"kube-system"/"metrics-server-token-2vx2d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2vx2d" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116205  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609457    1237 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116360  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609499    1237 reflector.go:138] object-"kube-system"/"kindnet-token-c5qt9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-c5qt9" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.123705  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:09 old-k8s-version-406673 kubelet[1237]: E0916 11:44:09.475464    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.123850  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:10 old-k8s-version-406673 kubelet[1237]: E0916 11:44:10.312296    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.126870  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:33 old-k8s-version-406673 kubelet[1237]: E0916 11:44:33.264025    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.127338  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:34 old-k8s-version-406673 kubelet[1237]: E0916 11:44:34.404862    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127612  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:35 old-k8s-version-406673 kubelet[1237]: E0916 11:44:35.407622    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127855  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:39 old-k8s-version-406673 kubelet[1237]: E0916 11:44:39.894316    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127989  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:44 old-k8s-version-406673 kubelet[1237]: E0916 11:44:44.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.128412  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:55 old-k8s-version-406673 kubelet[1237]: E0916 11:44:55.437796    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.129943  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:58 old-k8s-version-406673 kubelet[1237]: E0916 11:44:58.310102    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.130185  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:59 old-k8s-version-406673 kubelet[1237]: E0916 11:44:59.894304    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.130422  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.205817    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.130556  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.206178    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.130984  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:24 old-k8s-version-406673 kubelet[1237]: E0916 11:45:24.482238    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.131116  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:25 old-k8s-version-406673 kubelet[1237]: E0916 11:45:25.206099    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.131360  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:29 old-k8s-version-406673 kubelet[1237]: E0916 11:45:29.894364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.131500  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:37 old-k8s-version-406673 kubelet[1237]: E0916 11:45:37.206069    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.131753  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:42 old-k8s-version-406673 kubelet[1237]: E0916 11:45:42.205686    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.133205  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:51 old-k8s-version-406673 kubelet[1237]: E0916 11:45:51.269661    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.133479  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:56 old-k8s-version-406673 kubelet[1237]: E0916 11:45:56.206262    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.133630  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:03 old-k8s-version-406673 kubelet[1237]: E0916 11:46:03.206044    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134062  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:11 old-k8s-version-406673 kubelet[1237]: E0916 11:46:11.550095    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134197  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:18 old-k8s-version-406673 kubelet[1237]: E0916 11:46:18.206493    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134434  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:19 old-k8s-version-406673 kubelet[1237]: E0916 11:46:19.894425    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134687  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.205670    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134821  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.206071    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134958  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:42 old-k8s-version-406673 kubelet[1237]: E0916 11:46:42.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.135210  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:44 old-k8s-version-406673 kubelet[1237]: E0916 11:46:44.205741    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.135344  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:56 old-k8s-version-406673 kubelet[1237]: E0916 11:46:56.206336    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.135580  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:58 old-k8s-version-406673 kubelet[1237]: E0916 11:46:58.207462    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.136101  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:11 old-k8s-version-406673 kubelet[1237]: E0916 11:47:11.206125    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.136340  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:12 old-k8s-version-406673 kubelet[1237]: E0916 11:47:12.205756    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.137850  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:22 old-k8s-version-406673 kubelet[1237]: E0916 11:47:22.276097    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.138089  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:24 old-k8s-version-406673 kubelet[1237]: E0916 11:47:24.205721    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.138317  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.206240    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.138647  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.670364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.138886  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:39 old-k8s-version-406673 kubelet[1237]: E0916 11:47:39.894246    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139020  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:46 old-k8s-version-406673 kubelet[1237]: E0916 11:47:46.206145    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139257  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:52 old-k8s-version-406673 kubelet[1237]: E0916 11:47:52.205673    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139390  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:57 old-k8s-version-406673 kubelet[1237]: E0916 11:47:57.206159    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139625  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:07 old-k8s-version-406673 kubelet[1237]: E0916 11:48:07.205557    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139761  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:08 old-k8s-version-406673 kubelet[1237]: E0916 11:48:08.206452    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139894  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:19 old-k8s-version-406673 kubelet[1237]: E0916 11:48:19.206101    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.140128  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:22 old-k8s-version-406673 kubelet[1237]: E0916 11:48:22.205857    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140267  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:30 old-k8s-version-406673 kubelet[1237]: E0916 11:48:30.206056    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.140523  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: E0916 11:48:33.205579    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140778  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: E0916 11:48:44.205863    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140950  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:45 old-k8s-version-406673 kubelet[1237]: E0916 11:48:45.206382    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.141221  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: E0916 11:48:56.205608    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.141578  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.141892  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.142029  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.142265  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.142397  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
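(Editorial note: each `Found kubelet problem` warning above comes from scanning the kubelet journal — `sudo journalctl -u kubelet -n 400` — for error-shaped lines. A rough sketch of such a scan follows; the marker substrings are illustrative guesses matching the lines reported above, not minikube's real pattern list in logs.go.)

```go
// scanproblems.go - sketch of the "Found kubelet problem" scan above.
// The substrings below are illustrative; minikube's actual patterns may differ.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// problemMarkers flag a kubelet journal line as a problem, matching the kinds
// of lines reported above (reflector watch failures, pod sync errors).
var problemMarkers = []string{
	"Error syncing pod",
	"Failed to watch",
}

func main() {
	// Mirrors: sudo journalctl -u kubelet -n 400
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range problemMarkers {
			if strings.Contains(line, m) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}
```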
	I0916 11:49:35.142408  342599 logs.go:123] Gathering logs for etcd [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6] ...
	I0916 11:49:35.142422  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:35.182113  342599 logs.go:123] Gathering logs for kube-scheduler [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f] ...
	I0916 11:49:35.182144  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:35.223823  342599 logs.go:123] Gathering logs for container status ...
	I0916 11:49:35.223856  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:49:35.262634  342599 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:49:35.262663  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:49:35.367246  342599 logs.go:123] Gathering logs for coredns [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84] ...
	I0916 11:49:35.367278  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:35.402793  342599 logs.go:123] Gathering logs for kube-controller-manager [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19] ...
	I0916 11:49:35.402829  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
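(Editorial note: every `Gathering logs for <component> [<id>]` step above resolves to the same remote command, `sudo /usr/bin/crictl logs --tail 400 <id>`, capped at 400 lines so a crash-looping container cannot flood the report. Below is a standalone sketch of that tail, assuming crictl lives at /usr/bin/crictl; `tailContainerLogs` is a hypothetical name.)

```go
// taillogs.go - sketch of the per-container log gathering above:
//   /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 <id>"
// Assumes crictl is installed at /usr/bin/crictl; not minikube's real code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// tailContainerLogs streams the last n lines of one CRI container's logs.
// crictl forwards the container's stdout and stderr streams separately,
// so both are wired through here.
func tailContainerLogs(id string, n int) error {
	cmd := exec.Command("sudo", "/usr/bin/crictl", "logs",
		"--tail", fmt.Sprint(n), id)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: taillogs <container-id>")
		os.Exit(1)
	}
	if err := tailContainerLogs(os.Args[1], 400); err != nil {
		fmt.Fprintln(os.Stderr, "crictl logs:", err)
		os.Exit(1)
	}
}
```

The `container status` step above hedges differently: `` sudo `which crictl || echo crictl` ps -a || sudo docker ps -a `` falls back to docker so the report still gets a process listing even when crictl is absent.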
	I0916 11:49:35.462604  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:35.462635  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:49:35.462715  342599 out.go:270] X Problems detected in kubelet:

	W0916 11:49:35.462728  342599 out.go:270]   Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.462739  342599 out.go:270]   Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.462755  342599 out.go:270]   Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.462770  342599 out.go:270]   Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.462780  342599 out.go:270]   Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:35.462788  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:35.462799  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
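(Editorial note: `Setting ErrFile to fd 2` and `TERM=,COLORTERM=, which probably does not support color` show minikube downgrading to plain output when the terminal advertises no color support. A conservative heuristic in that spirit follows; the exact rules here are assumptions, not minikube's out.go logic.)

```go
// colorcheck.go - sketch of the color-support heuristic suggested by the
// out.go lines above. The rules are guesses; minikube's real check may differ.
package main

import (
	"fmt"
	"os"
	"strings"
)

// probablySupportsColor applies a conservative heuristic: empty or "dumb"
// TERM means no color; any COLORTERM value is taken as a positive signal.
func probablySupportsColor() bool {
	term := os.Getenv("TERM")
	if term == "" || term == "dumb" {
		return false
	}
	if os.Getenv("COLORTERM") != "" {
		return true
	}
	// Most color-capable terminals advertise it in TERM, e.g. xterm-256color.
	return strings.Contains(term, "color") || strings.HasPrefix(term, "xterm")
}

func main() {
	fmt.Printf("TERM=%s,COLORTERM=%s, color=%v\n",
		os.Getenv("TERM"), os.Getenv("COLORTERM"), probablySupportsColor())
}
```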
	I0916 11:49:45.464328  342599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:49:45.476151  342599 api_server.go:72] duration metric: took 5m52.257437357s to wait for apiserver process to appear ...
	I0916 11:49:45.476182  342599 api_server.go:88] waiting for apiserver healthz status ...
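(Editorial note: api_server.go first confirms a kube-apiserver process exists — `sudo pgrep -xnf kube-apiserver.*minikube.*` — then polls the apiserver's healthz endpoint until it answers, which is the wait whose `duration metric` is reported above. A sketch of that two-stage wait follows; the control-plane URL, timeouts, and TLS handling are assumptions, not minikube's real configuration.)

```go
// waitapiserver.go - sketch of the readiness wait above: confirm the process
// via pgrep, then poll /healthz. URL and intervals below are hypothetical.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// healthz performs one GET against the apiserver's health endpoint. The real
// apiserver serves HTTPS with a cluster CA; this sketch skips verification.
func healthz(base string) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	const base = "https://192.168.49.2:8443" // hypothetical control-plane address
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			if err := healthz(base); err == nil {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```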
	I0916 11:49:45.476243  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:49:45.476303  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:49:45.512448  342599 cri.go:89] found id: "f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:45.512475  342599 cri.go:89] found id: ""
	I0916 11:49:45.512483  342599 logs.go:276] 1 containers: [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d]
	I0916 11:49:45.512531  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.516037  342599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:49:45.516112  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:49:45.549762  342599 cri.go:89] found id: "7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:45.549791  342599 cri.go:89] found id: ""
	I0916 11:49:45.549801  342599 logs.go:276] 1 containers: [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6]
	I0916 11:49:45.549848  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.553456  342599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:49:45.553520  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:49:45.587005  342599 cri.go:89] found id: "97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:45.587029  342599 cri.go:89] found id: ""
	I0916 11:49:45.587038  342599 logs.go:276] 1 containers: [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84]
	I0916 11:49:45.587095  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.590764  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:49:45.590840  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:49:45.623784  342599 cri.go:89] found id: "0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:45.623809  342599 cri.go:89] found id: ""
	I0916 11:49:45.623818  342599 logs.go:276] 1 containers: [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f]
	I0916 11:49:45.623891  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.627377  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:49:45.627428  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:49:45.660479  342599 cri.go:89] found id: "5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:45.660505  342599 cri.go:89] found id: ""
	I0916 11:49:45.660513  342599 logs.go:276] 1 containers: [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849]
	I0916 11:49:45.660575  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.664047  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:49:45.664102  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:49:45.699816  342599 cri.go:89] found id: "b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:45.699842  342599 cri.go:89] found id: ""
	I0916 11:49:45.699851  342599 logs.go:276] 1 containers: [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19]
	I0916 11:49:45.699906  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.703371  342599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:49:45.703425  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:49:45.736043  342599 cri.go:89] found id: "368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:45.736062  342599 cri.go:89] found id: ""
	I0916 11:49:45.736069  342599 logs.go:276] 1 containers: [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4]
	I0916 11:49:45.736110  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.739784  342599 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:49:45.739851  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:49:45.772325  342599 cri.go:89] found id: "5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:45.772352  342599 cri.go:89] found id: ""
	I0916 11:49:45.772362  342599 logs.go:276] 1 containers: [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd]
	I0916 11:49:45.772418  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.775808  342599 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:49:45.775861  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:49:45.809227  342599 cri.go:89] found id: "97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:45.809253  342599 cri.go:89] found id: ""
	I0916 11:49:45.809261  342599 logs.go:276] 1 containers: [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf]
	I0916 11:49:45.809321  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.812839  342599 logs.go:123] Gathering logs for etcd [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6] ...
	I0916 11:49:45.812865  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:45.851277  342599 logs.go:123] Gathering logs for kube-apiserver [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d] ...
	I0916 11:49:45.851308  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:45.909692  342599 logs.go:123] Gathering logs for container status ...
	I0916 11:49:45.909724  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:49:45.950858  342599 logs.go:123] Gathering logs for kubelet ...
	I0916 11:49:45.950886  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:49:45.989747  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.608970    1237 reflector.go:138] object-"kube-system"/"storage-provisioner-token-767ft": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-767ft" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.989949  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609271    1237 reflector.go:138] object-"kube-system"/"coredns-token-75kvx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-75kvx" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990131  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609320    1237 reflector.go:138] object-"kube-system"/"metrics-server-token-2vx2d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2vx2d" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990299  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609457    1237 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990482  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609499    1237 reflector.go:138] object-"kube-system"/"kindnet-token-c5qt9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-c5qt9" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.998668  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:09 old-k8s-version-406673 kubelet[1237]: E0916 11:44:09.475464    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:45.998853  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:10 old-k8s-version-406673 kubelet[1237]: E0916 11:44:10.312296    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.002222  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:33 old-k8s-version-406673 kubelet[1237]: E0916 11:44:33.264025    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.002809  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:34 old-k8s-version-406673 kubelet[1237]: E0916 11:44:34.404862    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003157  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:35 old-k8s-version-406673 kubelet[1237]: E0916 11:44:35.407622    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003473  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:39 old-k8s-version-406673 kubelet[1237]: E0916 11:44:39.894316    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003636  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:44 old-k8s-version-406673 kubelet[1237]: E0916 11:44:44.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.004142  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:55 old-k8s-version-406673 kubelet[1237]: E0916 11:44:55.437796    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.005728  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:58 old-k8s-version-406673 kubelet[1237]: E0916 11:44:58.310102    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.005999  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:59 old-k8s-version-406673 kubelet[1237]: E0916 11:44:59.894304    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.006254  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.205817    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.006403  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.206178    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.006850  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:24 old-k8s-version-406673 kubelet[1237]: E0916 11:45:24.482238    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.007003  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:25 old-k8s-version-406673 kubelet[1237]: E0916 11:45:25.206099    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.007264  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:29 old-k8s-version-406673 kubelet[1237]: E0916 11:45:29.894364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.007412  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:37 old-k8s-version-406673 kubelet[1237]: E0916 11:45:37.206069    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.007693  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:42 old-k8s-version-406673 kubelet[1237]: E0916 11:45:42.205686    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.009255  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:51 old-k8s-version-406673 kubelet[1237]: E0916 11:45:51.269661    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.009575  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:56 old-k8s-version-406673 kubelet[1237]: E0916 11:45:56.206262    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.009750  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:03 old-k8s-version-406673 kubelet[1237]: E0916 11:46:03.206044    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.010204  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:11 old-k8s-version-406673 kubelet[1237]: E0916 11:46:11.550095    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.010352  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:18 old-k8s-version-406673 kubelet[1237]: E0916 11:46:18.206493    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.010606  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:19 old-k8s-version-406673 kubelet[1237]: E0916 11:46:19.894425    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.010860  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.205670    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.011011  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.206071    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011162  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:42 old-k8s-version-406673 kubelet[1237]: E0916 11:46:42.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011420  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:44 old-k8s-version-406673 kubelet[1237]: E0916 11:46:44.205741    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.011600  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:56 old-k8s-version-406673 kubelet[1237]: E0916 11:46:56.206336    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011878  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:58 old-k8s-version-406673 kubelet[1237]: E0916 11:46:58.207462    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.012463  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:11 old-k8s-version-406673 kubelet[1237]: E0916 11:47:11.206125    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.012710  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:12 old-k8s-version-406673 kubelet[1237]: E0916 11:47:12.205756    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.014264  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:22 old-k8s-version-406673 kubelet[1237]: E0916 11:47:22.276097    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.014506  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:24 old-k8s-version-406673 kubelet[1237]: E0916 11:47:24.205721    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.014752  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.206240    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.015120  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.670364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015359  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:39 old-k8s-version-406673 kubelet[1237]: E0916 11:47:39.894246    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015495  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:46 old-k8s-version-406673 kubelet[1237]: E0916 11:47:46.206145    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.015737  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:52 old-k8s-version-406673 kubelet[1237]: E0916 11:47:52.205673    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015874  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:57 old-k8s-version-406673 kubelet[1237]: E0916 11:47:57.206159    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016114  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:07 old-k8s-version-406673 kubelet[1237]: E0916 11:48:07.205557    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.016247  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:08 old-k8s-version-406673 kubelet[1237]: E0916 11:48:08.206452    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016379  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:19 old-k8s-version-406673 kubelet[1237]: E0916 11:48:19.206101    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016615  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:22 old-k8s-version-406673 kubelet[1237]: E0916 11:48:22.205857    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.016751  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:30 old-k8s-version-406673 kubelet[1237]: E0916 11:48:30.206056    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016990  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: E0916 11:48:33.205579    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017226  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: E0916 11:48:44.205863    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017396  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:45 old-k8s-version-406673 kubelet[1237]: E0916 11:48:45.206382    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.017637  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: E0916 11:48:56.205608    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017975  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.018314  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.018532  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.018808  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.018949  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.019188  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: E0916 11:49:35.205643    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.019329  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:39 old-k8s-version-406673 kubelet[1237]: E0916 11:49:39.206202    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:46.019344  342599 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:49:46.019362  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:49:46.120596  342599 logs.go:123] Gathering logs for kube-proxy [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849] ...
	I0916 11:49:46.120625  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:46.155238  342599 logs.go:123] Gathering logs for kindnet [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4] ...
	I0916 11:49:46.155276  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:46.196278  342599 logs.go:123] Gathering logs for kubernetes-dashboard [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf] ...
	I0916 11:49:46.196315  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:46.231618  342599 logs.go:123] Gathering logs for dmesg ...
	I0916 11:49:46.231644  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:49:46.251725  342599 logs.go:123] Gathering logs for coredns [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84] ...
	I0916 11:49:46.251757  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:46.285166  342599 logs.go:123] Gathering logs for kube-scheduler [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f] ...
	I0916 11:49:46.285195  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:46.322323  342599 logs.go:123] Gathering logs for kube-controller-manager [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19] ...
	I0916 11:49:46.322354  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:46.383562  342599 logs.go:123] Gathering logs for storage-provisioner [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd] ...
	I0916 11:49:46.383598  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:46.419476  342599 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:49:46.419504  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:49:46.486057  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:46.486091  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:49:46.486149  342599 out.go:270] X Problems detected in kubelet:
	W0916 11:49:46.486160  342599 out.go:270]   Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.486167  342599 out.go:270]   Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.486178  342599 out.go:270]   Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.486186  342599 out.go:270]   Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: E0916 11:49:35.205643    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.486192  342599 out.go:270]   Sep 16 11:49:39 old-k8s-version-406673 kubelet[1237]: E0916 11:49:39.206202    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:46.486197  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:46.486202  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:49:56.486729  342599 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:49:56.492442  342599 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:49:56.494563  342599 out.go:201] 
	W0916 11:49:56.495936  342599 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0916 11:49:56.495972  342599 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0916 11:49:56.495996  342599 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0916 11:49:56.496004  342599 out.go:270] * 
	W0916 11:49:56.496790  342599 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 11:49:56.498603  342599 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-406673 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 102
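For reference, the recovery path suggested by the log above amounts to purging the profile state and re-running the same start invocation; a minimal sketch, assuming the failure is the transient control-plane condition tracked in kubernetes/minikube#11417 rather than an environment problem:

    # Remove all minikube profiles and cached state, as the log suggests
    out/minikube-linux-amd64 delete --all --purge
    # Retry the exact start invocation that failed (flags copied from the test args above)
    out/minikube-linux-amd64 start -p old-k8s-version-406673 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default \
      --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
      --keep-context=false --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.20.0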
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-406673
helpers_test.go:235: (dbg) docker inspect old-k8s-version-406673:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b",
	        "Created": "2024-09-16T11:41:15.966557614Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:43:41.856340647Z",
	            "FinishedAt": "2024-09-16T11:43:40.973630206Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hosts",
	        "LogPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b-json.log",
	        "Name": "/old-k8s-version-406673",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-406673:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-406673",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-406673",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-406673/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-406673",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56f8a2f3575a0a3b313b56d9db518474b2f321b76a887e55e0a93f6b40f9cac8",
	            "SandboxKey": "/var/run/docker/netns/56f8a2f3575a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-406673": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49cf3e3468396ba01b588ae85b5e7bcdf3e6dcfeb05d207136018542ad1d54df",
	                    "EndpointID": "09f66bc7471fc8394e9becd78bcf298cb4869abecddacfbc6a06bf8255a6855b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-406673",
	                        "28d6c5fc26a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
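The docker inspect dump above can be narrowed to the fields the post-mortem actually checks by passing a Go template via --format/-f; a minimal sketch (field paths taken from the JSON above, container name from this run):

    # Container state ("running" per the State block above)
    docker inspect -f '{{.State.Status}}' old-k8s-version-406673
    # Node IP on the profile's network (192.168.103.2 per NetworkSettings above)
    docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-406673").IPAddress}}' old-k8s-version-406673
    # Host port mapped to the API server's 8443/tcp (33096 per the Ports block above)
    docker inspect -f '{{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' old-k8s-version-406673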
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25: (1.229564555s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC |                     |
	|         | sudo systemctl status docker                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat docker                              |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo cat                                               |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo docker system info                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                  |                           |         |         |                     |                     |
	|         | cri-docker --all --full                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat cri-docker                          |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cri-dockerd --version                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                  |                           |         |         |                     |                     |
	|         | containerd --all --full                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat containerd                          |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service                 |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                               |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                             |                           |         |         |                     |                     |
	|         | --all --full --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                            |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                           |         |         |                     |                     |
	|         | \;                                                     |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                       |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467     | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:43:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:43:41.448675  342599 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:43:41.449069  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:43:41.449083  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:43:41.449090  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:43:41.449520  342599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:43:41.450534  342599 out.go:352] Setting JSON to false
	I0916 11:43:41.451659  342599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5161,"bootTime":1726481860,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:43:41.451763  342599 start.go:139] virtualization: kvm guest
	I0916 11:43:41.454105  342599 out.go:177] * [old-k8s-version-406673] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:43:41.455638  342599 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:43:41.455671  342599 notify.go:220] Checking for updates...
	I0916 11:43:41.458330  342599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:43:41.459636  342599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:41.460924  342599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:43:41.462503  342599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:43:41.464018  342599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:43:41.466022  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:41.468148  342599 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 11:43:41.469509  342599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:43:41.493994  342599 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:43:41.494082  342599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:43:41.552267  342599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:43:41.542033993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
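
The docker system info --format "{{json .}}" probes in this start sequence are how the driver check inspects the host daemon before reusing the existing profile. As a rough standalone sketch of the same probe (the function name and error handling are illustrative, not minikube's actual cli_runner code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo shells out to the docker CLI, as the cli_runner lines above do,
    // and decodes the single JSON object that --format "{{json .}}" prints.
    func dockerInfo() (map[string]any, error) {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            return nil, fmt.Errorf("docker system info: %w", err)
        }
        var info map[string]any
        if err := json.Unmarshal(out, &info); err != nil {
            return nil, err
        }
        return info, nil
    }

    func main() {
        info, err := dockerInfo()
        if err != nil {
            fmt.Println("docker unavailable:", err)
            return
        }
        fmt.Println("server version:", info["ServerVersion"])
    }
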
	I0916 11:43:41.552366  342599 docker.go:318] overlay module found
	I0916 11:43:41.554456  342599 out.go:177] * Using the docker driver based on existing profile
	I0916 11:43:41.555523  342599 start.go:297] selected driver: docker
	I0916 11:43:41.555540  342599 start.go:901] validating driver "docker" against &{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:41.555622  342599 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:43:41.556394  342599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:43:41.611358  342599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:43:41.600217835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:43:41.611712  342599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:43:41.611741  342599 cni.go:84] Creating CNI manager for ""
	I0916 11:43:41.611767  342599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:43:41.611800  342599 start.go:340] cluster config:
	{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:41.614659  342599 out.go:177] * Starting "old-k8s-version-406673" primary control-plane node in "old-k8s-version-406673" cluster
	I0916 11:43:41.616047  342599 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:43:41.617540  342599 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:43:41.619066  342599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:43:41.619093  342599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:43:41.619118  342599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 11:43:41.619138  342599 cache.go:56] Caching tarball of preloaded images
	I0916 11:43:41.619235  342599 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:43:41.619248  342599 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 11:43:41.619349  342599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	W0916 11:43:41.640867  342599 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:43:41.640901  342599 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:43:41.641001  342599 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:43:41.641018  342599 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:43:41.641022  342599 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:43:41.641030  342599 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:43:41.641034  342599 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:43:41.718830  342599 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:43:41.718879  342599 cache.go:194] Successfully downloaded all kic artifacts
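
The image.go/cache.go lines above try the kicbase image in the local Docker daemon first, reject it because the architecture is wrong, and fall back to the cached tarball. A minimal sketch of that resolution order, leaving out the architecture check; the tarball path passed in main is hypothetical, only the image ref comes from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // resolveBaseImage prefers an image already present in the local Docker daemon
    // and falls back to a tarball cached on disk, mirroring the lookup order logged above.
    func resolveBaseImage(ref, cacheTar string) (string, error) {
        if err := exec.Command("docker", "image", "inspect", ref).Run(); err == nil {
            return "daemon:" + ref, nil
        }
        if _, err := os.Stat(cacheTar); err == nil {
            return "tarball:" + cacheTar, nil
        }
        return "", fmt.Errorf("image %s not available locally", ref)
    }

    func main() {
        // Hypothetical cache path; the real layout lives under .minikube/cache.
        src, err := resolveBaseImage(
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644",
            os.ExpandEnv("$HOME/.minikube/cache/kic/kicbase.tar"),
        )
        fmt.Println(src, err)
    }
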
	I0916 11:43:41.718924  342599 start.go:360] acquireMachinesLock for old-k8s-version-406673: {Name:mk8e16c995170a3c051ae96503b85729d385d06f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:43:41.719008  342599 start.go:364] duration metric: took 59.119µs to acquireMachinesLock for "old-k8s-version-406673"
	I0916 11:43:41.719031  342599 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:43:41.719049  342599 fix.go:54] fixHost starting: 
	I0916 11:43:41.719280  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:41.737386  342599 fix.go:112] recreateIfNeeded on old-k8s-version-406673: state=Stopped err=<nil>
	W0916 11:43:41.737478  342599 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:43:41.739550  342599 out.go:177] * Restarting existing docker container for "old-k8s-version-406673" ...
	I0916 11:43:41.740931  342599 cli_runner.go:164] Run: docker start old-k8s-version-406673
	I0916 11:43:42.037870  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:42.057638  342599 kic.go:430] container "old-k8s-version-406673" state is running.
	I0916 11:43:42.058125  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:42.077127  342599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:43:42.077438  342599 machine.go:93] provisionDockerMachine start ...
	I0916 11:43:42.077513  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:42.096731  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:42.096978  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:42.096997  342599 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:43:42.097660  342599 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48048->127.0.0.1:33093: read: connection reset by peer
	I0916 11:43:45.232865  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
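
provisionDockerMachine drives the steps that follow over SSH: dial 127.0.0.1:33093 (the host port mapped to the container's 22/tcp), authenticate as the docker user with the machine's id_rsa, and run one command per step, starting with hostname. A minimal sketch using golang.org/x/crypto/ssh; the InsecureIgnoreHostKey callback is an assumption that is only defensible for a throwaway local test container:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and port are the ones shown in the log.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // username from the sshutil lines below
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33093", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out) // expected: old-k8s-version-406673
    }
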
	
	I0916 11:43:45.232896  342599 ubuntu.go:169] provisioning hostname "old-k8s-version-406673"
	I0916 11:43:45.232959  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.254903  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.255229  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.255258  342599 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-406673 && echo "old-k8s-version-406673" | sudo tee /etc/hostname
	I0916 11:43:45.401461  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:43:45.401545  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.419533  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.419740  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.419760  342599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-406673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-406673/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-406673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:43:45.557487  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:43:45.557514  342599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:43:45.557560  342599 ubuntu.go:177] setting up certificates
	I0916 11:43:45.557573  342599 provision.go:84] configureAuth start
	I0916 11:43:45.557627  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:45.574760  342599 provision.go:143] copyHostCerts
	I0916 11:43:45.574844  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:43:45.574860  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:43:45.574945  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:43:45.575091  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:43:45.575105  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:43:45.575153  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:43:45.575244  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:43:45.575255  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:43:45.575295  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:43:45.575376  342599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-406673 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-406673]
	I0916 11:43:45.748283  342599 provision.go:177] copyRemoteCerts
	I0916 11:43:45.748356  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:43:45.748393  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.765636  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:45.862269  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:43:45.885003  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 11:43:45.907169  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:43:45.931358  342599 provision.go:87] duration metric: took 373.76893ms to configureAuth
	I0916 11:43:45.931402  342599 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:43:45.931619  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:45.931737  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.950090  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.950326  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.950350  342599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:43:46.250285  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:43:46.250314  342599 machine.go:96] duration metric: took 4.172856931s to provisionDockerMachine
	I0916 11:43:46.250329  342599 start.go:293] postStartSetup for "old-k8s-version-406673" (driver="docker")
	I0916 11:43:46.250342  342599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:43:46.250412  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:43:46.250460  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.269457  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.370592  342599 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:43:46.373854  342599 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:43:46.373887  342599 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:43:46.373895  342599 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:43:46.373901  342599 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:43:46.373912  342599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:43:46.373966  342599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:43:46.374049  342599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:43:46.374134  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:43:46.382190  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:43:46.404854  342599 start.go:296] duration metric: took 154.508203ms for postStartSetup
	I0916 11:43:46.404944  342599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:43:46.404984  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.423369  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.518250  342599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:43:46.522658  342599 fix.go:56] duration metric: took 4.803604453s for fixHost
	I0916 11:43:46.522684  342599 start.go:83] releasing machines lock for "old-k8s-version-406673", held for 4.803664456s
	I0916 11:43:46.522755  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:46.540413  342599 ssh_runner.go:195] Run: cat /version.json
	I0916 11:43:46.540463  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.540483  342599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:43:46.540550  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.559326  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.559343  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.649310  342599 ssh_runner.go:195] Run: systemctl --version
	I0916 11:43:46.731311  342599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:43:46.869148  342599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:43:46.873764  342599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:43:46.882554  342599 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:43:46.882626  342599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:43:46.891468  342599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
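
The find/mv pair above sidelines conflicting CNI configs by renaming them with a .mk_disabled suffix instead of deleting them, so the originals can be restored later. A rough Go equivalent of the loopback case (run as root; the glob pattern is the one from the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue // already sidelined
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                fmt.Fprintln(os.Stderr, "rename:", err)
            }
        }
    }

Renaming rather than removing is what lets a later start re-enable the stock config unchanged.
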
	I0916 11:43:46.891491  342599 start.go:495] detecting cgroup driver to use...
	I0916 11:43:46.891523  342599 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:43:46.891589  342599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:43:46.903563  342599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:43:46.914685  342599 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:43:46.914743  342599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:43:46.927471  342599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:43:46.938829  342599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:43:47.019225  342599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:43:47.095917  342599 docker.go:233] disabling docker service ...
	I0916 11:43:47.095984  342599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:43:47.108451  342599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:43:47.119842  342599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:43:47.196356  342599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:43:47.275282  342599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:43:47.286402  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:43:47.301909  342599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 11:43:47.301978  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.311648  342599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:43:47.311699  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.321003  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.330113  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.339110  342599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:43:47.348230  342599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:43:47.356509  342599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:43:47.364678  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:47.441764  342599 ssh_runner.go:195] Run: sudo systemctl restart crio
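
The sed commands above rewrite two keys in CRI-O's drop-in config before restarting the service. The same in-place edit can be sketched with multiline regexps; the path and replacement values are taken from the log, the rest is illustrative:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // (?m) makes ^ and $ match per line, like sed's default line addressing.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            log.Fatal(err)
        }
        // A systemctl restart crio is still required afterwards, as in the log.
    }
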
	I0916 11:43:47.538547  342599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:43:47.538607  342599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:43:47.542039  342599 start.go:563] Will wait 60s for crictl version
	I0916 11:43:47.542091  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:47.545302  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:43:47.578706  342599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:43:47.578785  342599 ssh_runner.go:195] Run: crio --version
	I0916 11:43:47.613962  342599 ssh_runner.go:195] Run: crio --version
	I0916 11:43:47.653182  342599 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0916 11:43:47.654482  342599 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:43:47.672357  342599 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:43:47.676229  342599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
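
The { grep -v ...; echo ...; } > /tmp/h.$$ pattern above makes the /etc/hosts update idempotent: any stale line for the name is dropped before the fresh mapping is appended. A small sketch of the same idea; the tab-suffix match mirrors the grep expression:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale mapping, as grep -v does
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // atomic swap, like the cp from /tmp/h.$$
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("hosts entry ensured")
    }
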
	I0916 11:43:47.687076  342599 kubeadm.go:883] updating cluster {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:43:47.687218  342599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:43:47.687280  342599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:43:47.727184  342599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:43:47.727258  342599 ssh_runner.go:195] Run: which lz4
	I0916 11:43:47.730999  342599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:43:47.734265  342599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:43:47.734295  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 11:43:48.663263  342599 crio.go:462] duration metric: took 932.291429ms to copy over tarball
	I0916 11:43:48.663330  342599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:43:51.176610  342599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513253657s)
	I0916 11:43:51.176636  342599 crio.go:469] duration metric: took 2.513345828s to extract the tarball
	I0916 11:43:51.176643  342599 ssh_runner.go:146] rm: /preloaded.tar.lz4
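
With no preloaded images in the runtime, the ~473 MB preload tarball is copied to the node, unpacked into /var with lz4 decompression and security.capability xattrs preserved, then deleted. A local sketch of that extract-and-clean-up step using the same tar flags, with a duration metric in the style of the log:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Same flags as the ssh_runner invocation above, run locally for illustration.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract: %v: %s", err, out)
        }
        if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
            log.Fatal(err)
        }
        log.Printf("duration metric: took %s to extract the tarball", time.Since(start))
    }
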
	I0916 11:43:51.248591  342599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:43:51.284423  342599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:43:51.284455  342599 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:43:51.284517  342599 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:51.284558  342599 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.284565  342599 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.284571  342599 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.284544  342599 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.284593  342599 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:43:51.284623  342599 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.284686  342599 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.285864  342599 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.285942  342599 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.285948  342599 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.285942  342599 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.285946  342599 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.286009  342599 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:43:51.286019  342599 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.286049  342599 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
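
Each "needs transfer" decision below comes from comparing an expected image against what the runtime actually holds. A sketch of such a presence check through crictl's JSON listing; the field names (images, id, repoTags) are an assumption about crictl's output schema, not something shown in the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImages models the assumed shape of `crictl images --output json`.
    type crictlImages struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/pause:3.2")
        fmt.Println(ok, err)
    }
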
	I0916 11:43:51.492242  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.522713  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 11:43:51.534975  342599 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:43:51.535071  342599 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.535150  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.544750  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.545678  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.559215  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.568350  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.570259  342599 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:43:51.570308  342599 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:43:51.570346  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.570365  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.573562  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.622238  342599 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:43:51.622290  342599 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.622339  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.623682  342599 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:43:51.623772  342599 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.623841  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.757921  342599 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:43:51.757942  342599 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:43:51.757968  342599 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.757968  342599 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.758009  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758009  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758101  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.758165  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:51.758219  342599 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:43:51.758251  342599 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.758269  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.758285  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758367  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.819059  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:51.819128  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.819135  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.819062  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.819186  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.819225  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.819239  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:52.005990  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:52.007566  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:43:52.012996  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:52.013008  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:52.013082  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:52.013133  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:52.013213  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:52.113680  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:52.201435  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:52.201538  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:43:52.206771  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:43:52.208106  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:43:52.208187  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:52.225734  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:43:52.299412  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:43:52.299468  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:43:52.378199  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:52.517048  342599 cache_images.go:92] duration metric: took 1.232574481s to LoadCachedImages
	W0916 11:43:52.517148  342599 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0916 11:43:52.517167  342599 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 crio true true} ...
	I0916 11:43:52.517302  342599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-406673 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:43:52.517418  342599 ssh_runner.go:195] Run: crio config
	I0916 11:43:52.561512  342599 cni.go:84] Creating CNI manager for ""
	I0916 11:43:52.561534  342599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:43:52.561543  342599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:43:52.561561  342599 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-406673 NodeName:old-k8s-version-406673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:43:52.561689  342599 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-406673"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:43:52.561758  342599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:43:52.570704  342599 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:43:52.570772  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:43:52.579313  342599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (481 bytes)
	I0916 11:43:52.596268  342599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:43:52.612866  342599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
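The file just written, /var/tmp/minikube/kubeadm.yaml.new, is the multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); minikube later diffs it against the active /var/tmp/minikube/kubeadm.yaml. A minimal sketch for inspecting it on the node (path taken from the log; the GNU csplit split is illustrative):

    # list the document kinds in the rendered config
    sudo grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # split the multi-document YAML on its '---' separators for easier review
    sudo csplit --prefix=/tmp/kubeadm-doc- /var/tmp/minikube/kubeadm.yaml.new '/^---$/' '{*}'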
	I0916 11:43:52.629581  342599 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:43:52.632853  342599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:43:52.643379  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:52.720660  342599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:43:52.734195  342599 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673 for IP: 192.168.103.2
	I0916 11:43:52.734216  342599 certs.go:194] generating shared ca certs ...
	I0916 11:43:52.734231  342599 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:52.734355  342599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:43:52.734391  342599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:43:52.734402  342599 certs.go:256] generating profile certs ...
	I0916 11:43:52.734473  342599 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key
	I0916 11:43:52.734530  342599 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db
	I0916 11:43:52.734564  342599 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key
	I0916 11:43:52.734710  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:43:52.734744  342599 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:43:52.734754  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:43:52.734773  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:43:52.734795  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:43:52.734814  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:43:52.734850  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:43:52.735413  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:43:52.758887  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:43:52.782936  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:43:52.810335  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:43:52.835181  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:43:52.858252  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:43:52.880337  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:43:52.903907  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:43:52.927676  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:43:52.950944  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:43:52.974697  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:43:52.997934  342599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:43:53.016161  342599 ssh_runner.go:195] Run: openssl version
	I0916 11:43:53.021716  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:43:53.032092  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.035726  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.035794  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.042425  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:43:53.050857  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:43:53.059886  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.063252  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.063300  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.069514  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:43:53.078142  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:43:53.087290  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.090824  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.090896  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.097688  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
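The openssl x509 -hash / ln -fs pairs above implement OpenSSL's subject-hash lookup convention: a CA directory is searched via symlinks named <subject-hash>.0, so each certificate under /usr/share/ca-certificates gets a hash-named link in /etc/ssl/certs. A minimal sketch of one such link (the b5213941 value matches the minikubeCA entry in the log):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"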
	I0916 11:43:53.106525  342599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:43:53.109881  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:43:53.116612  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:43:53.123543  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:43:53.130272  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:43:53.136649  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:43:53.143689  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
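Each of the -checkend 86400 probes above exits non-zero if the certificate expires within 86400 seconds (24 hours), presumably so soon-to-expire control-plane certs can be regenerated before the restart. A minimal standalone version of the same check:

    # exit status 0: valid for at least another 24h; non-zero: renew it
    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "etcd server certificate expires within 24h"
    fi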
	I0916 11:43:53.151260  342599 kubeadm.go:392] StartCluster: {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:53.151380  342599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:43:53.151472  342599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:43:53.185768  342599 cri.go:89] found id: ""
	I0916 11:43:53.185846  342599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:43:53.194666  342599 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:43:53.194693  342599 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:43:53.194743  342599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:43:53.203055  342599 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:43:53.203881  342599 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-406673" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:53.204510  342599 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-406673" cluster setting kubeconfig missing "old-k8s-version-406673" context setting]
	I0916 11:43:53.205412  342599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:53.206930  342599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:43:53.215880  342599 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0916 11:43:53.215923  342599 kubeadm.go:597] duration metric: took 21.223045ms to restartPrimaryControlPlane
	I0916 11:43:53.215932  342599 kubeadm.go:394] duration metric: took 64.683125ms to StartCluster
	I0916 11:43:53.215949  342599 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:53.216018  342599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:53.218206  342599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:53.218661  342599 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:43:53.219512  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:53.219410  342599 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:43:53.219686  342599 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-406673"
	I0916 11:43:53.219705  342599 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-406673"
	W0916 11:43:53.219717  342599 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:43:53.219747  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.219785  342599 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-406673"
	I0916 11:43:53.219883  342599 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-406673"
	I0916 11:43:53.219823  342599 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-406673"
	I0916 11:43:53.220280  342599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-406673"
	I0916 11:43:53.219834  342599 addons.go:69] Setting dashboard=true in profile "old-k8s-version-406673"
	I0916 11:43:53.220375  342599 addons.go:234] Setting addon dashboard=true in "old-k8s-version-406673"
	W0916 11:43:53.220386  342599 addons.go:243] addon dashboard should already be in state true
	I0916 11:43:53.220422  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	W0916 11:43:53.220260  342599 addons.go:243] addon metrics-server should already be in state true
	I0916 11:43:53.220488  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.220653  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220710  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220869  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220926  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.221032  342599 out.go:177] * Verifying Kubernetes components...
	I0916 11:43:53.222752  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:53.244346  342599 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-406673"
	W0916 11:43:53.244373  342599 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:43:53.244398  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.244751  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.245037  342599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:43:53.246474  342599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:53.246481  342599 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:43:53.248096  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:43:53.248127  342599 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:43:53.248185  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.248192  342599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.248201  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:43:53.248098  342599 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:43:53.248252  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.250338  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:43:53.250359  342599 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:43:53.250404  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.273873  342599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:53.273898  342599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:43:53.273955  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.274169  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.275302  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.280036  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.301411  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.328656  342599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:43:53.340523  342599 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-406673" to be "Ready" ...
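This readiness wait polls the node object for up to 6 minutes; the "connection refused" errors against https://192.168.103.2:8443 further down are this poller running before the API server is listening. An equivalent manual probe, assuming the same kubeconfig and kubectl binary as in the log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.20.0/kubectl get node old-k8s-version-406673 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'  # expect "True"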
	I0916 11:43:53.387478  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:43:53.387506  342599 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:43:53.387745  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.396664  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:43:53.396691  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:43:53.406440  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:43:53.406463  342599 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:43:53.407903  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:53.416422  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:43:53.416449  342599 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:43:53.427712  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:43:53.427740  342599 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:43:53.439315  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:53.439342  342599 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:43:53.503707  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:43:53.503732  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 11:43:53.510579  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:53.525664  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:43:53.525696  342599 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0916 11:43:53.525914  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.525944  342599 retry.go:31] will retry after 152.87848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:53.532836  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.532872  342599 retry.go:31] will retry after 157.07542ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
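From here on, every addon apply fails with "connection refused" because the API server on localhost:8443 is not up yet, and retry.go re-runs the command after a short, growing, jittered delay. A minimal bash sketch of the same apply-until-ready loop (delays illustrative; the real backoff policy lives in minikube's retry package):

    for delay in 0.15 0.3 0.6 1.2; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml && break
      sleep "$delay"  # back off while the control plane comes up
    done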
	I0916 11:43:53.601969  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:43:53.601994  342599 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:43:53.621346  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:43:53.621373  342599 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0916 11:43:53.634937  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.634974  342599 retry.go:31] will retry after 321.390454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.639540  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:43:53.639567  342599 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:43:53.656867  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:53.656893  342599 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:43:53.673744  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:53.679888  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.691095  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:53.745183  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.745217  342599 retry.go:31] will retry after 136.130565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:53.796348  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.796382  342599 retry.go:31] will retry after 443.518837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:53.810771  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.810811  342599 retry.go:31] will retry after 382.546252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.881722  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:53.941956  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.941994  342599 retry.go:31] will retry after 236.364167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.957151  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:43:54.015814  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.015853  342599 retry.go:31] will retry after 375.113173ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.179194  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:54.193519  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:54.240911  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:54.252866  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.252918  342599 retry.go:31] will retry after 401.151273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.296437  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.296479  342599 retry.go:31] will retry after 764.07049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.333432  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.333478  342599 retry.go:31] will retry after 477.82927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.392081  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:43:54.451932  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.451973  342599 retry.go:31] will retry after 337.169739ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.654238  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:54.712692  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.712728  342599 retry.go:31] will retry after 935.95517ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.789893  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:54.812303  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:54.852000  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.852037  342599 retry.go:31] will retry after 1.132792971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.874248  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.874282  342599 retry.go:31] will retry after 1.153231222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.061616  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:55.118580  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.118616  342599 retry.go:31] will retry after 952.42092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.341220  342599 node_ready.go:53] error getting node "old-k8s-version-406673": Get "https://192.168.103.2:8443/api/v1/nodes/old-k8s-version-406673": dial tcp 192.168.103.2:8443: connect: connection refused
	I0916 11:43:55.649816  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:55.707503  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.707546  342599 retry.go:31] will retry after 1.525466419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.985469  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:56.027729  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:56.048118  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.048158  342599 retry.go:31] will retry after 1.537917974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.071232  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:56.087643  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.087676  342599 retry.go:31] will retry after 1.497738328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:56.130041  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.130083  342599 retry.go:31] will retry after 1.703517602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.233406  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:57.294430  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.294464  342599 retry.go:31] will retry after 1.40258396s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.342100  342599 node_ready.go:53] error getting node "old-k8s-version-406673": Get "https://192.168.103.2:8443/api/v1/nodes/old-k8s-version-406673": dial tcp 192.168.103.2:8443: connect: connection refused
	I0916 11:43:57.586456  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:57.586462  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:57.646094  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.646123  342599 retry.go:31] will retry after 1.833576806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:57.646162  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.646188  342599 retry.go:31] will retry after 2.656765994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.834560  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:57.892906  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.892939  342599 retry.go:31] will retry after 2.18125411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:58.698022  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:58.758259  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:58.758297  342599 retry.go:31] will retry after 1.653760659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
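[Annotation] The repeated "will retry after ..." lines above come from minikube's retry helper (retry.go:31), which re-runs each `kubectl apply --force` with a growing, jittered delay until the apiserver on localhost:8443 starts accepting connections. A minimal Go sketch of that pattern, assuming a plain exponential backoff; the `applyAddon` helper, the kubectl path, and the backoff constants are illustrative, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyAddon shells out to kubectl the same way the log lines above do.
// The binary name and manifest paths are placeholders for illustration.
func applyAddon(manifests ...string) error {
	args := append([]string{"apply", "--force", "-f"}, manifests...)
	return exec.Command("kubectl", args...).Run()
}

func main() {
	// Retry with a jittered, doubling delay until the apiserver answers,
	// mirroring the "will retry after 1.40258396s" style messages above.
	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml")
		if err == nil {
			return
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}
```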
	I0916 11:43:59.480055  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:44:00.074833  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:44:00.303327  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:44:00.413145  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:44:03.516026  342599 node_ready.go:49] node "old-k8s-version-406673" has status "Ready":"True"
	I0916 11:44:03.516063  342599 node_ready.go:38] duration metric: took 10.17550256s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:44:03.516076  342599 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:44:03.717989  342599 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.816595  342599 pod_ready.go:93] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:03.816691  342599 pod_ready.go:82] duration metric: took 98.666189ms for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.816719  342599 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.918233  342599 pod_ready.go:93] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:03.918276  342599 pod_ready.go:82] duration metric: took 101.538159ms for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.918295  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:04.620945  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.140838756s)
	I0916 11:44:04.621040  342599 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-406673"
	I0916 11:44:04.621047  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.317689547s)
	I0916 11:44:04.620999  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.546120779s)
	I0916 11:44:04.898187  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.484990296s)
	I0916 11:44:04.900305  342599 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-406673 addons enable metrics-server
	
	I0916 11:44:04.901863  342599 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0916 11:44:04.903406  342599 addons.go:510] duration metric: took 11.683989587s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
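[Annotation] The "Enabled addons" summary closes the addon phase, and the out.go hint above prints the exact command for enabling full dashboard metrics later. A trivial Go wrapper around that same command, purely illustrative (normally you would just run it in a shell):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Run the exact command the dashboard hint above prints.
	cmd := exec.Command("minikube", "-p", "old-k8s-version-406673",
		"addons", "enable", "metrics-server")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run()
}
```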
	I0916 11:44:05.923500  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:07.924626  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:09.926452  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:12.424233  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:14.924223  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:15.423797  342599 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:15.423828  342599 pod_ready.go:82] duration metric: took 11.505525488s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:15.423838  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:17.429733  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:19.430224  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:21.929713  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:24.009627  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:26.430326  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:28.430726  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:30.930780  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:33.433263  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:35.929752  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:37.929837  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:39.930279  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:41.930540  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:43.930791  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:46.429510  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:48.430161  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:50.430295  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:52.929990  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:54.930580  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:57.429547  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:59.430191  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:01.930680  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:04.430050  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:06.431610  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:08.929699  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:11.430903  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:13.929747  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:15.931063  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:18.430474  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:20.929144  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:22.929833  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:24.930407  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:26.931153  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:29.430186  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:31.929901  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:34.430487  342599 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.430512  342599 pod_ready.go:82] duration metric: took 1m19.006667807s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.430523  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.435258  342599 pod_ready.go:93] pod "kube-proxy-pcbvp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.435281  342599 pod_ready.go:82] duration metric: took 4.751917ms for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.435290  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.439468  342599 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.439490  342599 pod_ready.go:82] duration metric: took 4.192562ms for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.439505  342599 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:36.445827  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:38.946013  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:41.444852  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:43.445737  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:45.946748  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:48.445118  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:50.445210  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:52.445816  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:54.446068  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:56.945501  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:58.945685  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:01.445377  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:03.445752  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:05.945806  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:08.446010  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:10.446073  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:12.945844  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:15.446131  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:17.946289  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:20.445864  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:22.445951  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:24.946488  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:27.445839  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:29.945436  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:31.945951  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:33.947646  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:36.445905  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:38.948094  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:41.446271  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:43.978003  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:46.445688  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:48.946292  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:51.445713  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:53.945072  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:55.945739  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:58.445191  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:00.445680  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:02.446254  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:04.946036  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:07.447667  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:09.945983  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:12.445228  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:14.445689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:16.445931  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:18.945281  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:20.945433  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:22.946291  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:25.444655  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:27.445696  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:29.445774  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:31.945999  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:34.444676  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:36.445444  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:38.945689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:41.446060  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:43.948656  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:46.445159  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:48.946051  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:51.446010  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:53.446145  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:55.945438  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:57.945706  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:59.946103  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:02.445233  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:04.945988  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:07.445200  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:09.446085  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:11.944825  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:13.945689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:16.444784  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:18.444860  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:20.445186  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:22.447125  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:24.945528  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:26.945691  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:29.446345  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:31.945589  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:34.444967  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:36.445485  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:38.945937  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:41.445492  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:43.445794  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:45.945563  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:48.445313  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:50.946012  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:53.445570  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:55.947554  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:58.445126  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:00.945813  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:02.946300  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:05.445265  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:07.446242  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:09.946173  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:12.446147  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:14.945283  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:17.447088  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:19.945240  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:21.945474  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:24.445814  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:26.945457  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:29.445643  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:31.945681  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:34.445158  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:34.445185  342599 pod_ready.go:82] duration metric: took 4m0.005672608s for pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace to be "Ready" ...
	E0916 11:49:34.445196  342599 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:49:34.445205  342599 pod_ready.go:39] duration metric: took 5m30.929118215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
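[Annotation] The pod_ready.go lines above poll each system pod's Ready condition until it flips to "True" or the wait deadline expires with "context deadline exceeded" (here metrics-server never becomes Ready because its image registry is a fake domain, as the kubelet problems further down show). A sketch of that polling pattern with client-go; the function name `waitPodReady` is hypothetical, the clientset wiring is assumed, and minikube's real code differs in detail:

```go
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod's Ready condition until it is True or ctx expires.
// Mirrors the pod_ready.go behaviour above; clientset setup is omitted.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // has status "Ready":"True"
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as in the log
		case <-ticker.C:
		}
	}
}
```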
	I0916 11:49:34.445222  342599 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:49:34.445252  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:49:34.445299  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:49:34.479712  342599 cri.go:89] found id: "f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:34.479738  342599 cri.go:89] found id: ""
	I0916 11:49:34.479748  342599 logs.go:276] 1 containers: [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d]
	I0916 11:49:34.479800  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.483247  342599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:49:34.483318  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:49:34.517155  342599 cri.go:89] found id: "7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:34.517180  342599 cri.go:89] found id: ""
	I0916 11:49:34.517188  342599 logs.go:276] 1 containers: [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6]
	I0916 11:49:34.517247  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.520774  342599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:49:34.520856  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:49:34.554354  342599 cri.go:89] found id: "97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:34.554377  342599 cri.go:89] found id: ""
	I0916 11:49:34.554387  342599 logs.go:276] 1 containers: [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84]
	I0916 11:49:34.554452  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.557960  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:49:34.558017  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:49:34.594211  342599 cri.go:89] found id: "0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:34.594233  342599 cri.go:89] found id: ""
	I0916 11:49:34.594241  342599 logs.go:276] 1 containers: [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f]
	I0916 11:49:34.594291  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.597717  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:49:34.597782  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:49:34.631348  342599 cri.go:89] found id: "5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:34.631372  342599 cri.go:89] found id: ""
	I0916 11:49:34.631382  342599 logs.go:276] 1 containers: [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849]
	I0916 11:49:34.631438  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.634962  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:49:34.635076  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:49:34.668370  342599 cri.go:89] found id: "b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:34.668392  342599 cri.go:89] found id: ""
	I0916 11:49:34.668401  342599 logs.go:276] 1 containers: [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19]
	I0916 11:49:34.668456  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.671903  342599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:49:34.671964  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:49:34.707573  342599 cri.go:89] found id: "368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:34.707601  342599 cri.go:89] found id: ""
	I0916 11:49:34.707611  342599 logs.go:276] 1 containers: [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4]
	I0916 11:49:34.707658  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.711089  342599 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:49:34.711146  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:49:34.746008  342599 cri.go:89] found id: "97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:34.746034  342599 cri.go:89] found id: ""
	I0916 11:49:34.746041  342599 logs.go:276] 1 containers: [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf]
	I0916 11:49:34.746091  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.749832  342599 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:49:34.749936  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:49:34.782428  342599 cri.go:89] found id: "5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:34.782453  342599 cri.go:89] found id: ""
	I0916 11:49:34.782462  342599 logs.go:276] 1 containers: [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd]
	I0916 11:49:34.782512  342599 ssh_runner.go:195] Run: which crictl
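[Annotation] The cri.go block above resolves one container ID per control-plane component by running `sudo crictl ps -a --quiet --name=<component>` over SSH and splitting the output. A local sketch of the same enumeration, assuming crictl is on PATH and sudo is available (it runs crictl directly rather than through minikube's ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the "listing CRI containers" step above:
// `crictl ps -a --quiet --name=<name>` prints one container ID per line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(component)
		fmt.Printf("%s: ids=%v err=%v\n", component, ids, err)
	}
}
```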
	I0916 11:49:34.786501  342599 logs.go:123] Gathering logs for dmesg ...
	I0916 11:49:34.786532  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:49:34.807221  342599 logs.go:123] Gathering logs for kube-proxy [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849] ...
	I0916 11:49:34.807251  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:34.843519  342599 logs.go:123] Gathering logs for kube-apiserver [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d] ...
	I0916 11:49:34.843550  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:34.904038  342599 logs.go:123] Gathering logs for kubernetes-dashboard [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf] ...
	I0916 11:49:34.904072  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:34.938520  342599 logs.go:123] Gathering logs for kindnet [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4] ...
	I0916 11:49:34.938549  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:34.976046  342599 logs.go:123] Gathering logs for storage-provisioner [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd] ...
	I0916 11:49:34.976077  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:35.011710  342599 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:49:35.011741  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:49:35.076295  342599 logs.go:123] Gathering logs for kubelet ...
	I0916 11:49:35.076330  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
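[Annotation] After tailing the kubelet journal with `journalctl -u kubelet -n 400`, logs.go:138 scans the output and flags suspicious lines as "Found kubelet problem" (the warnings that follow). A sketch of that scan; the substring patterns here are an illustrative heuristic, not minikube's exact matcher:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Tail the kubelet unit the same way the log-gathering step above does.
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	// Flag error-level kubelet lines; matching on the glog "E0" marker and
	// "Error syncing pod" is an assumption for illustration only.
	scanner := bufio.NewScanner(strings.NewReader(string(out)))
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, ": E0") || strings.Contains(line, "Error syncing pod") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
```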
	W0916 11:49:35.115636  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.608970    1237 reflector.go:138] object-"kube-system"/"storage-provisioner-token-767ft": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-767ft" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.115819  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609271    1237 reflector.go:138] object-"kube-system"/"coredns-token-75kvx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-75kvx" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116040  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609320    1237 reflector.go:138] object-"kube-system"/"metrics-server-token-2vx2d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2vx2d" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116205  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609457    1237 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116360  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609499    1237 reflector.go:138] object-"kube-system"/"kindnet-token-c5qt9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-c5qt9" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.123705  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:09 old-k8s-version-406673 kubelet[1237]: E0916 11:44:09.475464    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.123850  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:10 old-k8s-version-406673 kubelet[1237]: E0916 11:44:10.312296    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.126870  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:33 old-k8s-version-406673 kubelet[1237]: E0916 11:44:33.264025    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.127338  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:34 old-k8s-version-406673 kubelet[1237]: E0916 11:44:34.404862    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127612  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:35 old-k8s-version-406673 kubelet[1237]: E0916 11:44:35.407622    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127855  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:39 old-k8s-version-406673 kubelet[1237]: E0916 11:44:39.894316    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127989  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:44 old-k8s-version-406673 kubelet[1237]: E0916 11:44:44.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.128412  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:55 old-k8s-version-406673 kubelet[1237]: E0916 11:44:55.437796    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.129943  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:58 old-k8s-version-406673 kubelet[1237]: E0916 11:44:58.310102    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.130185  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:59 old-k8s-version-406673 kubelet[1237]: E0916 11:44:59.894304    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.130422  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.205817    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.130556  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.206178    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.130984  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:24 old-k8s-version-406673 kubelet[1237]: E0916 11:45:24.482238    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.131116  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:25 old-k8s-version-406673 kubelet[1237]: E0916 11:45:25.206099    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.131360  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:29 old-k8s-version-406673 kubelet[1237]: E0916 11:45:29.894364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.131500  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:37 old-k8s-version-406673 kubelet[1237]: E0916 11:45:37.206069    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.131753  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:42 old-k8s-version-406673 kubelet[1237]: E0916 11:45:42.205686    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.133205  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:51 old-k8s-version-406673 kubelet[1237]: E0916 11:45:51.269661    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.133479  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:56 old-k8s-version-406673 kubelet[1237]: E0916 11:45:56.206262    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.133630  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:03 old-k8s-version-406673 kubelet[1237]: E0916 11:46:03.206044    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134062  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:11 old-k8s-version-406673 kubelet[1237]: E0916 11:46:11.550095    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134197  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:18 old-k8s-version-406673 kubelet[1237]: E0916 11:46:18.206493    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134434  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:19 old-k8s-version-406673 kubelet[1237]: E0916 11:46:19.894425    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134687  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.205670    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134821  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.206071    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134958  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:42 old-k8s-version-406673 kubelet[1237]: E0916 11:46:42.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.135210  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:44 old-k8s-version-406673 kubelet[1237]: E0916 11:46:44.205741    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.135344  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:56 old-k8s-version-406673 kubelet[1237]: E0916 11:46:56.206336    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.135580  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:58 old-k8s-version-406673 kubelet[1237]: E0916 11:46:58.207462    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.136101  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:11 old-k8s-version-406673 kubelet[1237]: E0916 11:47:11.206125    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.136340  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:12 old-k8s-version-406673 kubelet[1237]: E0916 11:47:12.205756    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.137850  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:22 old-k8s-version-406673 kubelet[1237]: E0916 11:47:22.276097    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.138089  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:24 old-k8s-version-406673 kubelet[1237]: E0916 11:47:24.205721    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.138317  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.206240    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.138647  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.670364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.138886  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:39 old-k8s-version-406673 kubelet[1237]: E0916 11:47:39.894246    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139020  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:46 old-k8s-version-406673 kubelet[1237]: E0916 11:47:46.206145    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139257  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:52 old-k8s-version-406673 kubelet[1237]: E0916 11:47:52.205673    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139390  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:57 old-k8s-version-406673 kubelet[1237]: E0916 11:47:57.206159    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139625  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:07 old-k8s-version-406673 kubelet[1237]: E0916 11:48:07.205557    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139761  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:08 old-k8s-version-406673 kubelet[1237]: E0916 11:48:08.206452    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139894  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:19 old-k8s-version-406673 kubelet[1237]: E0916 11:48:19.206101    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.140128  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:22 old-k8s-version-406673 kubelet[1237]: E0916 11:48:22.205857    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140267  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:30 old-k8s-version-406673 kubelet[1237]: E0916 11:48:30.206056    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.140523  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: E0916 11:48:33.205579    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140778  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: E0916 11:48:44.205863    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140950  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:45 old-k8s-version-406673 kubelet[1237]: E0916 11:48:45.206382    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.141221  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: E0916 11:48:56.205608    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.141578  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.141892  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.142029  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.142265  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.142397  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:35.142408  342599 logs.go:123] Gathering logs for etcd [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6] ...
	I0916 11:49:35.142422  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:35.182113  342599 logs.go:123] Gathering logs for kube-scheduler [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f] ...
	I0916 11:49:35.182144  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:35.223823  342599 logs.go:123] Gathering logs for container status ...
	I0916 11:49:35.223856  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:49:35.262634  342599 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:49:35.262663  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:49:35.367246  342599 logs.go:123] Gathering logs for coredns [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84] ...
	I0916 11:49:35.367278  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:35.402793  342599 logs.go:123] Gathering logs for kube-controller-manager [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19] ...
	I0916 11:49:35.402829  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:35.462604  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:35.462635  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:49:35.462715  342599 out.go:270] X Problems detected in kubelet:
	W0916 11:49:35.462728  342599 out.go:270]   Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.462739  342599 out.go:270]   Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.462755  342599 out.go:270]   Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.462770  342599 out.go:270]   Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.462780  342599 out.go:270]   Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:35.462788  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:35.462799  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
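	(Editorial note, not part of the captured log: the two problems flagged above recur throughout this run. A hedged sketch of commands one could run against the same node to confirm them; pod names and namespaces are copied from the log above, the kubectl invocations are standard tooling and an assumption on our part, not minikube's own gathering code.)
	# metrics-server sits in ImagePullBackOff because its image is pinned to an
	# unresolvable registry host -- the ErrImagePull lines above show
	# "lookup fake.domain: no such host".
	kubectl -n kube-system describe pod metrics-server-9975d5f86-zkwwm
	# dashboard-metrics-scraper is in CrashLoopBackOff: the container starts but
	# keeps exiting, so the previous run's logs usually show why.
	sudo crictl ps -a --name dashboard-metrics-scraper
	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-dxnqs --previous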
	I0916 11:49:45.464328  342599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:49:45.476151  342599 api_server.go:72] duration metric: took 5m52.257437357s to wait for apiserver process to appear ...
	I0916 11:49:45.476182  342599 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:49:45.476243  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:49:45.476303  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:49:45.512448  342599 cri.go:89] found id: "f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:45.512475  342599 cri.go:89] found id: ""
	I0916 11:49:45.512483  342599 logs.go:276] 1 containers: [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d]
	I0916 11:49:45.512531  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.516037  342599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:49:45.516112  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:49:45.549762  342599 cri.go:89] found id: "7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:45.549791  342599 cri.go:89] found id: ""
	I0916 11:49:45.549801  342599 logs.go:276] 1 containers: [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6]
	I0916 11:49:45.549848  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.553456  342599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:49:45.553520  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:49:45.587005  342599 cri.go:89] found id: "97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:45.587029  342599 cri.go:89] found id: ""
	I0916 11:49:45.587038  342599 logs.go:276] 1 containers: [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84]
	I0916 11:49:45.587095  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.590764  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:49:45.590840  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:49:45.623784  342599 cri.go:89] found id: "0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:45.623809  342599 cri.go:89] found id: ""
	I0916 11:49:45.623818  342599 logs.go:276] 1 containers: [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f]
	I0916 11:49:45.623891  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.627377  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:49:45.627428  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:49:45.660479  342599 cri.go:89] found id: "5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:45.660505  342599 cri.go:89] found id: ""
	I0916 11:49:45.660513  342599 logs.go:276] 1 containers: [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849]
	I0916 11:49:45.660575  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.664047  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:49:45.664102  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:49:45.699816  342599 cri.go:89] found id: "b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:45.699842  342599 cri.go:89] found id: ""
	I0916 11:49:45.699851  342599 logs.go:276] 1 containers: [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19]
	I0916 11:49:45.699906  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.703371  342599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:49:45.703425  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:49:45.736043  342599 cri.go:89] found id: "368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:45.736062  342599 cri.go:89] found id: ""
	I0916 11:49:45.736069  342599 logs.go:276] 1 containers: [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4]
	I0916 11:49:45.736110  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.739784  342599 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:49:45.739851  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:49:45.772325  342599 cri.go:89] found id: "5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:45.772352  342599 cri.go:89] found id: ""
	I0916 11:49:45.772362  342599 logs.go:276] 1 containers: [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd]
	I0916 11:49:45.772418  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.775808  342599 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:49:45.775861  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:49:45.809227  342599 cri.go:89] found id: "97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:45.809253  342599 cri.go:89] found id: ""
	I0916 11:49:45.809261  342599 logs.go:276] 1 containers: [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf]
	I0916 11:49:45.809321  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.812839  342599 logs.go:123] Gathering logs for etcd [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6] ...
	I0916 11:49:45.812865  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:45.851277  342599 logs.go:123] Gathering logs for kube-apiserver [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d] ...
	I0916 11:49:45.851308  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:45.909692  342599 logs.go:123] Gathering logs for container status ...
	I0916 11:49:45.909724  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:49:45.950858  342599 logs.go:123] Gathering logs for kubelet ...
	I0916 11:49:45.950886  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:49:45.989747  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.608970    1237 reflector.go:138] object-"kube-system"/"storage-provisioner-token-767ft": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-767ft" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.989949  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609271    1237 reflector.go:138] object-"kube-system"/"coredns-token-75kvx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-75kvx" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990131  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609320    1237 reflector.go:138] object-"kube-system"/"metrics-server-token-2vx2d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2vx2d" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990299  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609457    1237 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990482  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609499    1237 reflector.go:138] object-"kube-system"/"kindnet-token-c5qt9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-c5qt9" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.998668  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:09 old-k8s-version-406673 kubelet[1237]: E0916 11:44:09.475464    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:45.998853  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:10 old-k8s-version-406673 kubelet[1237]: E0916 11:44:10.312296    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.002222  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:33 old-k8s-version-406673 kubelet[1237]: E0916 11:44:33.264025    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.002809  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:34 old-k8s-version-406673 kubelet[1237]: E0916 11:44:34.404862    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003157  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:35 old-k8s-version-406673 kubelet[1237]: E0916 11:44:35.407622    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003473  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:39 old-k8s-version-406673 kubelet[1237]: E0916 11:44:39.894316    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003636  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:44 old-k8s-version-406673 kubelet[1237]: E0916 11:44:44.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.004142  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:55 old-k8s-version-406673 kubelet[1237]: E0916 11:44:55.437796    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.005728  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:58 old-k8s-version-406673 kubelet[1237]: E0916 11:44:58.310102    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.005999  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:59 old-k8s-version-406673 kubelet[1237]: E0916 11:44:59.894304    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.006254  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.205817    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.006403  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.206178    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.006850  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:24 old-k8s-version-406673 kubelet[1237]: E0916 11:45:24.482238    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.007003  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:25 old-k8s-version-406673 kubelet[1237]: E0916 11:45:25.206099    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.007264  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:29 old-k8s-version-406673 kubelet[1237]: E0916 11:45:29.894364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.007412  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:37 old-k8s-version-406673 kubelet[1237]: E0916 11:45:37.206069    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.007693  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:42 old-k8s-version-406673 kubelet[1237]: E0916 11:45:42.205686    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.009255  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:51 old-k8s-version-406673 kubelet[1237]: E0916 11:45:51.269661    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.009575  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:56 old-k8s-version-406673 kubelet[1237]: E0916 11:45:56.206262    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.009750  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:03 old-k8s-version-406673 kubelet[1237]: E0916 11:46:03.206044    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.010204  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:11 old-k8s-version-406673 kubelet[1237]: E0916 11:46:11.550095    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.010352  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:18 old-k8s-version-406673 kubelet[1237]: E0916 11:46:18.206493    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.010606  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:19 old-k8s-version-406673 kubelet[1237]: E0916 11:46:19.894425    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.010860  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.205670    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.011011  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.206071    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011162  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:42 old-k8s-version-406673 kubelet[1237]: E0916 11:46:42.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011420  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:44 old-k8s-version-406673 kubelet[1237]: E0916 11:46:44.205741    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.011600  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:56 old-k8s-version-406673 kubelet[1237]: E0916 11:46:56.206336    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011878  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:58 old-k8s-version-406673 kubelet[1237]: E0916 11:46:58.207462    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.012463  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:11 old-k8s-version-406673 kubelet[1237]: E0916 11:47:11.206125    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.012710  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:12 old-k8s-version-406673 kubelet[1237]: E0916 11:47:12.205756    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.014264  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:22 old-k8s-version-406673 kubelet[1237]: E0916 11:47:22.276097    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.014506  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:24 old-k8s-version-406673 kubelet[1237]: E0916 11:47:24.205721    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.014752  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.206240    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.015120  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.670364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015359  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:39 old-k8s-version-406673 kubelet[1237]: E0916 11:47:39.894246    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015495  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:46 old-k8s-version-406673 kubelet[1237]: E0916 11:47:46.206145    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.015737  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:52 old-k8s-version-406673 kubelet[1237]: E0916 11:47:52.205673    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015874  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:57 old-k8s-version-406673 kubelet[1237]: E0916 11:47:57.206159    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016114  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:07 old-k8s-version-406673 kubelet[1237]: E0916 11:48:07.205557    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.016247  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:08 old-k8s-version-406673 kubelet[1237]: E0916 11:48:08.206452    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016379  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:19 old-k8s-version-406673 kubelet[1237]: E0916 11:48:19.206101    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016615  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:22 old-k8s-version-406673 kubelet[1237]: E0916 11:48:22.205857    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.016751  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:30 old-k8s-version-406673 kubelet[1237]: E0916 11:48:30.206056    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016990  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: E0916 11:48:33.205579    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017226  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: E0916 11:48:44.205863    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017396  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:45 old-k8s-version-406673 kubelet[1237]: E0916 11:48:45.206382    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.017637  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: E0916 11:48:56.205608    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017975  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.018314  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.018532  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.018808  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.018949  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.019188  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: E0916 11:49:35.205643    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.019329  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:39 old-k8s-version-406673 kubelet[1237]: E0916 11:49:39.206202    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:46.019344  342599 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:49:46.019362  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:49:46.120596  342599 logs.go:123] Gathering logs for kube-proxy [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849] ...
	I0916 11:49:46.120625  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:46.155238  342599 logs.go:123] Gathering logs for kindnet [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4] ...
	I0916 11:49:46.155276  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:46.196278  342599 logs.go:123] Gathering logs for kubernetes-dashboard [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf] ...
	I0916 11:49:46.196315  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:46.231618  342599 logs.go:123] Gathering logs for dmesg ...
	I0916 11:49:46.231644  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:49:46.251725  342599 logs.go:123] Gathering logs for coredns [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84] ...
	I0916 11:49:46.251757  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:46.285166  342599 logs.go:123] Gathering logs for kube-scheduler [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f] ...
	I0916 11:49:46.285195  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:46.322323  342599 logs.go:123] Gathering logs for kube-controller-manager [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19] ...
	I0916 11:49:46.322354  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:46.383562  342599 logs.go:123] Gathering logs for storage-provisioner [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd] ...
	I0916 11:49:46.383598  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:46.419476  342599 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:49:46.419504  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:49:46.486057  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:46.486091  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:49:46.486149  342599 out.go:270] X Problems detected in kubelet:
	W0916 11:49:46.486160  342599 out.go:270]   Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.486167  342599 out.go:270]   Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.486178  342599 out.go:270]   Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.486186  342599 out.go:270]   Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: E0916 11:49:35.205643    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.486192  342599 out.go:270]   Sep 16 11:49:39 old-k8s-version-406673 kubelet[1237]: E0916 11:49:39.206202    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:46.486197  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:46.486202  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:49:56.486729  342599 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:49:56.492442  342599 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:49:56.494563  342599 out.go:201] 
	W0916 11:49:56.495936  342599 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0916 11:49:56.495972  342599 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0916 11:49:56.495996  342599 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0916 11:49:56.496004  342599 out.go:270] * 
	W0916 11:49:56.496790  342599 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 11:49:56.498603  342599 out.go:201] 
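
	The two recovery steps named above can be run as-is; a minimal sketch (the commands are taken verbatim from the minikube output, and the -p profile name is assumed from the node name in this run):

	  # Capture the full log bundle the issue template asks for:
	  minikube logs --file=logs.txt -p old-k8s-version-406673

	  # Tear down every profile and purge cached state, per the suggestion above:
	  minikube delete --all --purge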
	
	
	==> CRI-O <==
	Sep 16 11:47:35 old-k8s-version-406673 crio[658]: time="2024-09-16 11:47:35.684804232Z" level=info msg="Removed container 143ea0723dd7309adf34215d6a21f7c6322c510a3c558c9cedec9dbaaef6a581: kubernetes-dashboard/dashboard-metrics-scraper-8d5bb5db8-dxnqs/dashboard-metrics-scraper" id=6338338f-f4b7-4804-8489-2e0c191489d3 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Sep 16 11:47:46 old-k8s-version-406673 crio[658]: time="2024-09-16 11:47:46.205607604Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e46d1883-df3c-478c-9b52-43b1f4b66b53 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:47:46 old-k8s-version-406673 crio[658]: time="2024-09-16 11:47:46.205909613Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e46d1883-df3c-478c-9b52-43b1f4b66b53 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:47:57 old-k8s-version-406673 crio[658]: time="2024-09-16 11:47:57.205590762Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0ee50556-8e10-451a-9d07-2ae4c6c9996b name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:47:57 old-k8s-version-406673 crio[658]: time="2024-09-16 11:47:57.205872422Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0ee50556-8e10-451a-9d07-2ae4c6c9996b name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:08 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:08.205927770Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0fda9557-920b-416b-aed1-8828429a423a name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:08 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:08.206213666Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0fda9557-920b-416b-aed1-8828429a423a name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:19 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:19.205516445Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6b61ba83-a7fe-4777-9ceb-3deed76a75b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:19 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:19.205792802Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6b61ba83-a7fe-4777-9ceb-3deed76a75b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:30 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:30.205546758Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=225a3e9f-894f-4bed-9dc9-0f06dd504068 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:30 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:30.205816141Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=225a3e9f-894f-4bed-9dc9-0f06dd504068 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:45 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:45.205689456Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4594f766-f5f3-4768-8edb-2b6cacb02166 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:45 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:45.205912921Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4594f766-f5f3-4768-8edb-2b6cacb02166 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:58 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:58.167754509Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=5ce19d5a-f1e4-4a03-9328-16c1753b7070 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:58 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:58.168027574Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5ce19d5a-f1e4-4a03-9328-16c1753b7070 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:59 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:59.205587660Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=46e5b126-4bc5-4a49-8299-5156fb3752e8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:59 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:59.205837574Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=46e5b126-4bc5-4a49-8299-5156fb3752e8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:13 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:13.205588860Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ba709834-5cdd-4321-8fb8-da4780efcc01 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:13 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:13.205818502Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ba709834-5cdd-4321-8fb8-da4780efcc01 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:27 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:27.205589711Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6eea8117-c121-4212-9d87-2ba516071584 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:27 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:27.205844824Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6eea8117-c121-4212-9d87-2ba516071584 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:39 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:39.205649013Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1f2d6c84-3fdc-4849-9820-4b277fde2583 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:39 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:39.205947544Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1f2d6c84-3fdc-4849-9820-4b277fde2583 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:50 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:50.205575795Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a21fdcfb-e6e2-419b-97b4-b89ff6910578 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:50 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:50.205891848Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a21fdcfb-e6e2-419b-97b4-b89ff6910578 name=/runtime.v1alpha2.ImageService/ImageStatus
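
	The repeated "Image ... not found" pairs above can be reproduced against the same runtime; a hedged sketch using crictl, which the log-gathering commands earlier in this run already invoke:

	  # Query CRI-O for the image directly; expect a not-found error for the fake registry:
	  sudo crictl inspecti fake.domain/registry.k8s.io/echoserver:1.4

	  # List what is actually cached locally:
	  sudo crictl images | grep echoserver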
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	16a2fe5b8b22e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   00002de23c0ba       dashboard-metrics-scraper-8d5bb5db8-dxnqs
	97a484780a356       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   12e93458aef4e       kubernetes-dashboard-cd95d586-h95rv
	368f056913391       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b         5 minutes ago       Running             kindnet-cni                 0                   a2e083bcb0a1a       kindnet-mjcgf
	97fdc1e66b0e4       bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16                                           5 minutes ago       Running             coredns                     0                   7039bf8d6d58b       coredns-74ff55c5b-6xlgw
	5847ee074474b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           5 minutes ago       Running             storage-provisioner         0                   c68dec692823c       storage-provisioner
	5685724a36b6b       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc                                           5 minutes ago       Running             kube-proxy                  0                   9e4b127922197       kube-proxy-pcbvp
	b80ee304bde37       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080                                           5 minutes ago       Running             kube-controller-manager     0                   35ecfba1db612       kube-controller-manager-old-k8s-version-406673
	f6539ef58f9e0       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99                                           5 minutes ago       Running             kube-apiserver              0                   b270c3a332bfb       kube-apiserver-old-k8s-version-406673
	0516988d4d0e8       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899                                           5 minutes ago       Running             kube-scheduler              0                   e951ebd232405       kube-scheduler-old-k8s-version-406673
	7017b3108f0be       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                           5 minutes ago       Running             etcd                        0                   cea61840367ab       etcd-old-k8s-version-406673
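
	The table above has the shape of crictl's container listing; to regenerate it on the node (a sketch, assuming the CRI-O socket minikube configured at /var/run/crio/crio.sock, per the node annotation below):

	  # -a includes Exited containers such as the failed dashboard-metrics-scraper attempt:
	  sudo crictl ps -a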
	
	
	==> coredns [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:38442 - 48402 "HINFO IN 8440324266966115617.7448481208015864567. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011622953s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:52772 - 30493 "HINFO IN 761927415616289072.1641658468185983910. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011782307s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-406673
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-406673
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-406673
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_41_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:41:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-406673
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:49:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:45:04 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:45:04 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:45:04 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:45:04 +0000   Mon, 16 Sep 2024 11:42:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-406673
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8ec375c2bd64b10897869c5d9453e9b
	  System UUID:                2d5bda39-09b0-43d0-95f9-1ff418499524
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-6xlgw                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m57s
	  kube-system                 etcd-old-k8s-version-406673                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m8s
	  kube-system                 kindnet-mjcgf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m57s
	  kube-system                 kube-apiserver-old-k8s-version-406673             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-controller-manager-old-k8s-version-406673    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-proxy-pcbvp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-scheduler-old-k8s-version-406673             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 metrics-server-9975d5f86-zkwwm                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         6m23s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-dxnqs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-h95rv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m9s                   kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m9s                   kubelet     Node old-k8s-version-406673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m9s                   kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m56s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m29s                  kubelet     Node old-k8s-version-406673 status is now: NodeReady
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-406673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m53s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 7b 93 72 59 99 08 06
	[Sep16 11:38] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e c8 59 6d ba 48 08 06
	[Sep16 11:39] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 0e 56 ba 2b 08 08 06
	[  +0.072831] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 e4 c5 5d 5b cd 08 06
	
	
	==> etcd [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6] <==
	2024-09-16 11:45:50.136507 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:00.136536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:10.136460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:20.136557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:30.136291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:40.136468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:50.136416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:00.136483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:10.136387 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:20.136454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:30.136397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:40.136445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:50.136438 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:00.136281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:10.136457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:20.136463 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:30.136398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:40.136622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:50.136499 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:00.136473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:10.136465 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:20.136455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:30.136556 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:40.136366 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:50.136454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
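
	Every /health probe above returned 200; the same check can be made from the node with etcdctl, if installed (a sketch; the endpoint and cert paths assume minikube's default /var/lib/minikube/certs layout rather than anything shown in this log):

	  sudo ETCDCTL_API=3 etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health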
	
	
	==> kernel <==
	 11:49:57 up  1:32,  0 users,  load average: 0.68, 0.64, 0.76
	Linux old-k8s-version-406673 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4] <==
	I0916 11:47:49.894541       1 main.go:299] handling current node
	I0916 11:47:59.901427       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:47:59.901469       1 main.go:299] handling current node
	I0916 11:48:09.894606       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:09.894648       1 main.go:299] handling current node
	I0916 11:48:19.896699       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:19.896770       1 main.go:299] handling current node
	I0916 11:48:29.903313       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:29.903350       1 main.go:299] handling current node
	I0916 11:48:39.902037       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:39.902089       1 main.go:299] handling current node
	I0916 11:48:49.902224       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:49.902259       1 main.go:299] handling current node
	I0916 11:48:59.901463       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:59.901497       1 main.go:299] handling current node
	I0916 11:49:09.894629       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:09.894667       1 main.go:299] handling current node
	I0916 11:49:19.901410       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:19.901446       1 main.go:299] handling current node
	I0916 11:49:29.902764       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:29.902840       1 main.go:299] handling current node
	I0916 11:49:39.897630       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:39.897675       1 main.go:299] handling current node
	I0916 11:49:49.902771       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:49.902809       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d] <==
	I0916 11:46:39.844735       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:46:39.844742       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 11:47:06.454676       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 11:47:06.454745       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:47:06.454753       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:47:19.048852       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:47:19.048892       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:47:19.048899       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:47:51.990441       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:47:51.990488       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:47:51.990496       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:48:26.148825       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:48:26.148872       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:48:26.148880       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 11:49:04.618784       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 11:49:04.618875       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:49:04.618884       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:49:07.795909       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:49:07.795951       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:49:07.795960       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:49:39.782779       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:49:39.782826       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:49:39.782835       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
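
	The recurring "v1beta1.metrics.k8s.io" failures above are the apiserver failing to reach the aggregated metrics APIService, whose backing pod never pulled its image; a sketch to confirm this from a working kubectl (the k8s-app label is assumed from the upstream metrics-server manifest, not shown in this log):

	  # The APIService should report Available=False while the pod is in ImagePullBackOff:
	  kubectl get apiservice v1beta1.metrics.k8s.io

	  kubectl -n kube-system get pods -l k8s-app=metrics-server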
	
	
	==> kube-controller-manager [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19] <==
	W0916 11:45:30.205258       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:45:54.100705       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:46:01.855531       1 request.go:655] Throttling request took 1.048706818s, request: GET:https://192.168.103.2:8443/apis/apiregistration.k8s.io/v1beta1?timeout=32s
	W0916 11:46:02.706861       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:46:24.600875       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:46:34.357365       1 request.go:655] Throttling request took 1.048487743s, request: GET:https://192.168.103.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0916 11:46:35.208330       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:46:55.102283       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:47:06.858592       1 request.go:655] Throttling request took 1.048648862s, request: GET:https://192.168.103.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0916 11:47:07.709483       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:47:25.603649       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:47:39.359703       1 request.go:655] Throttling request took 1.0487182s, request: GET:https://192.168.103.2:8443/apis/extensions/v1beta1?timeout=32s
	W0916 11:47:40.210574       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:47:56.105958       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:48:11.860709       1 request.go:655] Throttling request took 1.048808782s, request: GET:https://192.168.103.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0916 11:48:12.711817       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:48:26.607476       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:48:44.363089       1 request.go:655] Throttling request took 1.04855694s, request: GET:https://192.168.103.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0916 11:48:45.214378       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:48:57.109403       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:49:16.864750       1 request.go:655] Throttling request took 1.048355537s, request: GET:https://192.168.103.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0916 11:49:17.715986       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:49:27.611052       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:49:49.366266       1 request.go:655] Throttling request took 1.048549685s, request: GET:https://192.168.103.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0916 11:49:50.217118       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849] <==
	I0916 11:42:00.995500       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:42:00.995590       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:42:01.010731       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:42:01.010826       1 server_others.go:185] Using iptables Proxier.
	I0916 11:42:01.012001       1 server.go:650] Version: v1.20.0
	I0916 11:42:01.013499       1 config.go:315] Starting service config controller
	I0916 11:42:01.013577       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:42:01.013592       1 config.go:224] Starting endpoint slice config controller
	I0916 11:42:01.013614       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:42:01.113797       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:42:01.113806       1 shared_informer.go:247] Caches are synced for service config 
	I0916 11:44:04.717629       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:44:04.717947       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:44:04.731733       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:44:04.731957       1 server_others.go:185] Using iptables Proxier.
	I0916 11:44:04.732297       1 server.go:650] Version: v1.20.0
	I0916 11:44:04.732738       1 config.go:315] Starting service config controller
	I0916 11:44:04.732748       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:44:04.795276       1 config.go:224] Starting endpoint slice config controller
	I0916 11:44:04.795305       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:44:04.833853       1 shared_informer.go:247] Caches are synced for service config 
	I0916 11:44:04.895480       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f] <==
	E0916 11:41:40.593689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:40.593833       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:41:40.594045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:41:40.594338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:41:40.594501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:40.594699       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:41:40.594858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:41:40.595116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:41:40.595261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:41:40.595399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:41:41.428933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:41.508045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.594591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.695406       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0916 11:41:44.916550       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0916 11:43:59.643776       1 serving.go:331] Generated self-signed cert in-memory
	W0916 11:44:03.607340       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:44:03.607469       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:44:03.607489       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:44:03.607496       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:44:03.800839       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:44:03.801676       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:44:03.803024       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:44:03.801704       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0916 11:44:03.903482       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:48:19 old-k8s-version-406673 kubelet[1237]: E0916 11:48:19.206101    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:48:22 old-k8s-version-406673 kubelet[1237]: I0916 11:48:22.205287    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:48:22 old-k8s-version-406673 kubelet[1237]: E0916 11:48:22.205857    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:48:30 old-k8s-version-406673 kubelet[1237]: E0916 11:48:30.206056    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: I0916 11:48:33.205171    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: E0916 11:48:33.205579    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: I0916 11:48:44.205439    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: E0916 11:48:44.205863    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:48:45 old-k8s-version-406673 kubelet[1237]: E0916 11:48:45.206382    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: I0916 11:48:56.205223    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: E0916 11:48:56.205608    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:48:58 old-k8s-version-406673 kubelet[1237]: E0916 11:48:58.202657    1237 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b, memory: /docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/system.slice/kubelet.service
	Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: I0916 11:49:10.205202    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: I0916 11:49:21.205094    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: I0916 11:49:35.205223    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: E0916 11:49:35.205643    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:49:39 old-k8s-version-406673 kubelet[1237]: E0916 11:49:39.206202    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:49:50 old-k8s-version-406673 kubelet[1237]: I0916 11:49:50.205407    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:49:50 old-k8s-version-406673 kubelet[1237]: E0916 11:49:50.205809    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:49:50 old-k8s-version-406673 kubelet[1237]: E0916 11:49:50.206134    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
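
	Both back-offs the kubelet keeps reporting can be inspected with the pod names it prints; a hedged sketch (--previous only works while the last crashed container is still retained):

	  # ImagePullBackOff detail, including the fake.domain pull error:
	  kubectl -n kube-system describe pod metrics-server-9975d5f86-zkwwm

	  # Output of the last crashed dashboard-metrics-scraper container:
	  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-dxnqs --previous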
	
	
	==> kubernetes-dashboard [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf] <==
	2024/09/16 11:44:28 Starting overwatch
	2024/09/16 11:44:28 Using namespace: kubernetes-dashboard
	2024/09/16 11:44:28 Using in-cluster config to connect to apiserver
	2024/09/16 11:44:28 Using secret token for csrf signing
	2024/09/16 11:44:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:44:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:44:28 Successful initial request to the apiserver, version: v1.20.0
	2024/09/16 11:44:28 Generating JWE encryption key
	2024/09/16 11:44:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:44:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:44:29 Initializing JWE encryption key from synchronized object
	2024/09/16 11:44:29 Creating in-cluster Sidecar client
	2024/09/16 11:44:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:44:29 Serving insecurely on HTTP port: 9090
	2024/09/16 11:44:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:45:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:45:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:46:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:46:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:47:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:47:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:48:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:48:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:49:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd] <==
	I0916 11:42:33.942881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:42:33.952289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:42:33.952327       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:42:33.995195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:42:33.995263       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88c65391-c353-4f97-bac8-9bd49b9f0588", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77 became leader
	I0916 11:42:33.995326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	I0916 11:42:34.095721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	I0916 11:44:05.490838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:44:05.500843       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:44:05.500889       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:44:22.921932       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:44:22.922027       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88c65391-c353-4f97-bac8-9bd49b9f0588", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-406673_3ba9e9fb-376f-4c9d-ac7a-117467cbcd44 became leader
	I0916 11:44:22.922079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_3ba9e9fb-376f-4c9d-ac7a-117467cbcd44!
	I0916 11:44:23.022817       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_3ba9e9fb-376f-4c9d-ac7a-117467cbcd44!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (497.609µs)
helpers_test.go:263: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.13s)
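The repeated "fork/exec /usr/local/bin/kubectl: exec format error" above is the kernel refusing to execute the kubectl binary at all (ENOEXEC), which typically means the file was built for a different architecture than this amd64 agent, or is truncated, rather than anything being wrong with the cluster itself. A minimal Go sketch, assuming the binary is an ELF executable at the path from the log, that prints the header fields needed to confirm a mismatch:

	package main

	import (
		"debug/elf"
		"fmt"
		"log"
	)

	func main() {
		// "exec format error" is raised before the program ever runs, so
		// inspecting the ELF header is enough to spot an arch mismatch.
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			log.Fatalf("not parseable as ELF (truncated or non-ELF file?): %v", err)
		}
		defer f.Close()
		// An amd64 host expects class=ELFCLASS64 machine=EM_X86_64.
		fmt.Printf("class=%v machine=%v\n", f.Class, f.Machine)
	}

Any output other than ELFCLASS64/EM_X86_64 here would account for every kubectl failure in this run.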

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-h95rv" [a69b94e2-51ee-4cb5-8692-7882d7361328] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003881736s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-406673 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-406673 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (577.919µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-406673 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
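For context, the assertion that fails at start_stop_delete_test.go:297 only needs the describe output to mention the custom image; a rough standalone sketch of that check (profile and resource names taken from the log above, error handling simplified) could look like:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Describe the scraper deployment via the test profile's context,
		// the same command the harness ran above.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-406673",
			"describe", "deploy/dashboard-metrics-scraper",
			"-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			// This is where the run above died: kubectl never executed,
			// so the image check saw no deployment info at all.
			fmt.Printf("describe failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
			fmt.Println("addon did not load the expected image")
		}
	}

Because kubectl itself cannot exec, the string check is never reached, which is why the test reports an empty "Addon deployment info".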
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-406673
helpers_test.go:235: (dbg) docker inspect old-k8s-version-406673:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b",
	        "Created": "2024-09-16T11:41:15.966557614Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:43:41.856340647Z",
	            "FinishedAt": "2024-09-16T11:43:40.973630206Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/hosts",
	        "LogPath": "/var/lib/docker/containers/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b-json.log",
	        "Name": "/old-k8s-version-406673",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-406673:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-406673",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/450643ef93dc0ed33c6ffcd7647a4ed6ac8ab042487a09137d8b7879d836e5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-406673",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-406673/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-406673",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-406673",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56f8a2f3575a0a3b313b56d9db518474b2f321b76a887e55e0a93f6b40f9cac8",
	            "SandboxKey": "/var/run/docker/netns/56f8a2f3575a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-406673": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49cf3e3468396ba01b588ae85b5e7bcdf3e6dcfeb05d207136018542ad1d54df",
	                    "EndpointID": "09f66bc7471fc8394e9becd78bcf298cb4869abecddacfbc6a06bf8255a6855b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-406673",
	                        "28d6c5fc26a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
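Note the empty "HostPort" values under "PortBindings" in the inspect output above: minikube publishes 22/tcp, 2376/tcp, 5000/tcp, 8443/tcp and 32443/tcp bound to 127.0.0.1 with no fixed host port, so Docker assigns ephemeral ones (33093-33097, visible under "NetworkSettings.Ports"). A small sketch, reusing the same inspect template that appears later in this log, of how the assigned SSH port can be recovered for the profile container:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Empty HostPort in PortBindings means "let Docker choose"; the chosen
		// port only shows up under NetworkSettings.Ports once the container runs.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "old-k8s-version-406673").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh reachable at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}

This is the same lookup the provisioner performs below before dialing SSH on 127.0.0.1:33093.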
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-406673 logs -n 25: (1.217020058s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:40 UTC |                     |
	|         | sudo systemctl status docker                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat docker                              |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo cat                                               |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo docker system info                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                  |                           |         |         |                     |                     |
	|         | cri-docker --all --full                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat cri-docker                          |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cri-dockerd --version                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                  |                           |         |         |                     |                     |
	|         | containerd --all --full                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat containerd                          |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service                 |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                               |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                             |                           |         |         |                     |                     |
	|         | --all --full --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                            |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                           |         |         |                     |                     |
	|         | \;                                                     |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                       |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467 | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467     | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673    | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:43:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:43:41.448675  342599 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:43:41.449069  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:43:41.449083  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:43:41.449090  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:43:41.449520  342599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:43:41.450534  342599 out.go:352] Setting JSON to false
	I0916 11:43:41.451659  342599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5161,"bootTime":1726481860,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:43:41.451763  342599 start.go:139] virtualization: kvm guest
	I0916 11:43:41.454105  342599 out.go:177] * [old-k8s-version-406673] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:43:41.455638  342599 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:43:41.455671  342599 notify.go:220] Checking for updates...
	I0916 11:43:41.458330  342599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:43:41.459636  342599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:41.460924  342599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:43:41.462503  342599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:43:41.464018  342599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:43:41.466022  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:41.468148  342599 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 11:43:41.469509  342599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:43:41.493994  342599 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:43:41.494082  342599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:43:41.552267  342599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:43:41.542033993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:43:41.552366  342599 docker.go:318] overlay module found
	I0916 11:43:41.554456  342599 out.go:177] * Using the docker driver based on existing profile
	I0916 11:43:41.555523  342599 start.go:297] selected driver: docker
	I0916 11:43:41.555540  342599 start.go:901] validating driver "docker" against &{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:41.555622  342599 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:43:41.556394  342599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:43:41.611358  342599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:43:41.600217835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:43:41.611712  342599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:43:41.611741  342599 cni.go:84] Creating CNI manager for ""
	I0916 11:43:41.611767  342599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:43:41.611800  342599 start.go:340] cluster config:
	{Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:41.614659  342599 out.go:177] * Starting "old-k8s-version-406673" primary control-plane node in "old-k8s-version-406673" cluster
	I0916 11:43:41.616047  342599 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:43:41.617540  342599 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:43:41.619066  342599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:43:41.619093  342599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:43:41.619118  342599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 11:43:41.619138  342599 cache.go:56] Caching tarball of preloaded images
	I0916 11:43:41.619235  342599 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:43:41.619248  342599 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 11:43:41.619349  342599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	W0916 11:43:41.640867  342599 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:43:41.640901  342599 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:43:41.641001  342599 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:43:41.641018  342599 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:43:41.641022  342599 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:43:41.641030  342599 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:43:41.641034  342599 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:43:41.718830  342599 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:43:41.718879  342599 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:43:41.718924  342599 start.go:360] acquireMachinesLock for old-k8s-version-406673: {Name:mk8e16c995170a3c051ae96503b85729d385d06f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:43:41.719008  342599 start.go:364] duration metric: took 59.119µs to acquireMachinesLock for "old-k8s-version-406673"
	I0916 11:43:41.719031  342599 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:43:41.719049  342599 fix.go:54] fixHost starting: 
	I0916 11:43:41.719280  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:41.737386  342599 fix.go:112] recreateIfNeeded on old-k8s-version-406673: state=Stopped err=<nil>
	W0916 11:43:41.737478  342599 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:43:41.739550  342599 out.go:177] * Restarting existing docker container for "old-k8s-version-406673" ...
	I0916 11:43:41.740931  342599 cli_runner.go:164] Run: docker start old-k8s-version-406673
	I0916 11:43:42.037870  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:42.057638  342599 kic.go:430] container "old-k8s-version-406673" state is running.
	I0916 11:43:42.058125  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:42.077127  342599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/config.json ...
	I0916 11:43:42.077438  342599 machine.go:93] provisionDockerMachine start ...
	I0916 11:43:42.077513  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:42.096731  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:42.096978  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:42.096997  342599 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:43:42.097660  342599 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48048->127.0.0.1:33093: read: connection reset by peer
	I0916 11:43:45.232865  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:43:45.232896  342599 ubuntu.go:169] provisioning hostname "old-k8s-version-406673"
	I0916 11:43:45.232959  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.254903  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.255229  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.255258  342599 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-406673 && echo "old-k8s-version-406673" | sudo tee /etc/hostname
	I0916 11:43:45.401461  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-406673
	
	I0916 11:43:45.401545  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.419533  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.419740  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.419760  342599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-406673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-406673/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-406673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:43:45.557487  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:43:45.557514  342599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:43:45.557560  342599 ubuntu.go:177] setting up certificates
	I0916 11:43:45.557573  342599 provision.go:84] configureAuth start
	I0916 11:43:45.557627  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:45.574760  342599 provision.go:143] copyHostCerts
	I0916 11:43:45.574844  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:43:45.574860  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:43:45.574945  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:43:45.575091  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:43:45.575105  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:43:45.575153  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:43:45.575244  342599 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:43:45.575255  342599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:43:45.575295  342599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:43:45.575376  342599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-406673 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-406673]
	I0916 11:43:45.748283  342599 provision.go:177] copyRemoteCerts
	I0916 11:43:45.748356  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:43:45.748393  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.765636  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:45.862269  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:43:45.885003  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 11:43:45.907169  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:43:45.931358  342599 provision.go:87] duration metric: took 373.76893ms to configureAuth
	I0916 11:43:45.931402  342599 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:43:45.931619  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:45.931737  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:45.950090  342599 main.go:141] libmachine: Using SSH client type: native
	I0916 11:43:45.950326  342599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:43:45.950350  342599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:43:46.250285  342599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:43:46.250314  342599 machine.go:96] duration metric: took 4.172856931s to provisionDockerMachine
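	Every "Run:" line above and below goes through the same mechanism: ssh_runner opens an SSH session to the kicbase container's forwarded port and executes the snippet as the docker user. Below is a minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the port and key path seen in this log; the program itself (and its error handling) is illustrative, not minikube's actual ssh_runner code.
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
	
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container, not for real hosts
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33093", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
	
		// Same idempotent write-then-restart shape as the CRIO_MINIKUBE_OPTIONS step above.
		out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
		fmt.Printf("err=%v output=%s\n", err, out)
	}
	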
	I0916 11:43:46.250329  342599 start.go:293] postStartSetup for "old-k8s-version-406673" (driver="docker")
	I0916 11:43:46.250342  342599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:43:46.250412  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:43:46.250460  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.269457  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.370592  342599 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:43:46.373854  342599 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:43:46.373887  342599 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:43:46.373895  342599 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:43:46.373901  342599 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:43:46.373912  342599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:43:46.373966  342599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:43:46.374049  342599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:43:46.374134  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:43:46.382190  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:43:46.404854  342599 start.go:296] duration metric: took 154.508203ms for postStartSetup
	I0916 11:43:46.404944  342599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:43:46.404984  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.423369  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.518250  342599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:43:46.522658  342599 fix.go:56] duration metric: took 4.803604453s for fixHost
	I0916 11:43:46.522684  342599 start.go:83] releasing machines lock for "old-k8s-version-406673", held for 4.803664456s
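	The df probes just above feed minikube's low-disk warning: one reads the use% of /var, the other the free gigabytes. A rough pure-Go equivalent using golang.org/x/sys/unix instead of shelling out; the percentage arithmetic is the standard (total-free)/total calculation, assumed rather than taken from minikube's source.
	
	package main
	
	import (
		"fmt"
	
		"golang.org/x/sys/unix"
	)
	
	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/var", &st); err != nil {
			panic(err)
		}
		total := st.Blocks * uint64(st.Bsize)
		free := st.Bfree * uint64(st.Bsize)
		avail := st.Bavail * uint64(st.Bsize)
	
		// The same numbers "df -h /var" (use%) and "df -BG /var" (avail) report above.
		usedPct := float64(total-free) / float64(total) * 100
		fmt.Printf("used: %.0f%%, available: %dG\n", usedPct, avail>>30)
	}
	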
	I0916 11:43:46.522755  342599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-406673
	I0916 11:43:46.540413  342599 ssh_runner.go:195] Run: cat /version.json
	I0916 11:43:46.540463  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.540483  342599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:43:46.540550  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:46.559326  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.559343  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:46.649310  342599 ssh_runner.go:195] Run: systemctl --version
	I0916 11:43:46.731311  342599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:43:46.869148  342599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:43:46.873764  342599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:43:46.882554  342599 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:43:46.882626  342599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:43:46.891468  342599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
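	The two find/mv passes above disable any pre-existing loopback and bridge/podman CNI configs by renaming them to *.mk_disabled, so the kindnet config installed later is the only one CRI-O loads. A sketch of the same rename-to-disable idea in plain Go; the glob patterns come from the log, the helper function is illustrative.
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
	)
	
	// disableCNIConfs renames matching configs in /etc/cni/net.d so the
	// runtime stops loading them, mirroring the "mv {} {}.mk_disabled"
	// step in the log.
	func disableCNIConfs(patterns ...string) error {
		for _, pat := range patterns {
			matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
			if err != nil {
				return err
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", m)
			}
		}
		return nil
	}
	
	func main() {
		if err := disableCNIConfs("*loopback.conf*", "*bridge*", "*podman*"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	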
	I0916 11:43:46.891491  342599 start.go:495] detecting cgroup driver to use...
	I0916 11:43:46.891523  342599 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:43:46.891589  342599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:43:46.903563  342599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:43:46.914685  342599 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:43:46.914743  342599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:43:46.927471  342599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:43:46.938829  342599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:43:47.019225  342599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:43:47.095917  342599 docker.go:233] disabling docker service ...
	I0916 11:43:47.095984  342599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:43:47.108451  342599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:43:47.119842  342599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:43:47.196356  342599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:43:47.275282  342599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:43:47.286402  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:43:47.301909  342599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 11:43:47.301978  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.311648  342599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:43:47.311699  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.321003  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.330113  342599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:43:47.339110  342599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:43:47.348230  342599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:43:47.356509  342599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:43:47.364678  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:47.441764  342599 ssh_runner.go:195] Run: sudo systemctl restart crio
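	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.2, set cgroup_manager to cgroupfs (matching the detected host driver), and put conmon in the "pod" cgroup before restarting CRI-O. A line-oriented Go sketch of the same rewrite; the path and values are from the log, the regexp approach is an assumption standing in for sed.
	
	package main
	
	import (
		"os"
		"regexp"
	)
	
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	
	func main() {
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Equivalent to the sed 's|^.*pause_image = .*$|...|' edit above.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
		// Force cgroupfs and append conmon_cgroup after it; this sketch
		// assumes a single cgroup_manager line and no prior conmon_cgroup
		// line (the log deletes stale conmon_cgroup entries first).
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}
	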
	I0916 11:43:47.538547  342599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:43:47.538607  342599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:43:47.542039  342599 start.go:563] Will wait 60s for crictl version
	I0916 11:43:47.542091  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:47.545302  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:43:47.578706  342599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:43:47.578785  342599 ssh_runner.go:195] Run: crio --version
	I0916 11:43:47.613962  342599 ssh_runner.go:195] Run: crio --version
	I0916 11:43:47.653182  342599 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0916 11:43:47.654482  342599 cli_runner.go:164] Run: docker network inspect old-k8s-version-406673 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:43:47.672357  342599 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:43:47.676229  342599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
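	The bash one-liner above is an idempotent hosts update: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the temp file over /etc/hosts. The same logic in Go for readability; the 192.168.103.1 entry is from the log. A production version would write via a temp file and rename, as the shell version does.
	
	package main
	
	import (
		"os"
		"strings"
	)
	
	func main() {
		const entry = "192.168.103.1\thost.minikube.internal"
	
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Keep every line that is not an old host.minikube.internal mapping,
		// then append the current one, so reruns converge to one entry.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
	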
	I0916 11:43:47.687076  342599 kubeadm.go:883] updating cluster {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:43:47.687218  342599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:43:47.687280  342599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:43:47.727184  342599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:43:47.727258  342599 ssh_runner.go:195] Run: which lz4
	I0916 11:43:47.730999  342599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:43:47.734265  342599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:43:47.734295  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 11:43:48.663263  342599 crio.go:462] duration metric: took 932.291429ms to copy over tarball
	I0916 11:43:48.663330  342599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:43:51.176610  342599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513253657s)
	I0916 11:43:51.176636  342599 crio.go:469] duration metric: took 2.513345828s to extract the tarball
	I0916 11:43:51.176643  342599 ssh_runner.go:146] rm: /preloaded.tar.lz4
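	With no preloaded images in the runtime, the ~473 MB preload tarball is scp'd to /preloaded.tar.lz4 and unpacked into /var, preserving xattrs so file capabilities survive. A sketch of the extract step via os/exec, mirroring the exact tar flags from the log (lz4 is assumed to be on PATH, as it is in the kicbase image).
	
	package main
	
	import (
		"os"
		"os/exec"
	)
	
	func main() {
		// Mirrors: sudo tar --xattrs --xattrs-include security.capability \
		//              -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
		// The tarball is removed afterwards to reclaim the space, as in the log.
		_ = os.Remove("/preloaded.tar.lz4")
	}
	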
	I0916 11:43:51.248591  342599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:43:51.284423  342599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:43:51.284455  342599 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:43:51.284517  342599 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:51.284558  342599 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.284565  342599 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.284571  342599 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.284544  342599 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.284593  342599 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:43:51.284623  342599 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.284686  342599 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.285864  342599 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.285942  342599 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.285948  342599 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.285942  342599 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.285946  342599 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.286009  342599 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:43:51.286019  342599 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.286049  342599 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:51.492242  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.522713  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 11:43:51.534975  342599 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:43:51.535071  342599 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.535150  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.544750  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.545678  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.559215  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.568350  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.570259  342599 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:43:51.570308  342599 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:43:51.570346  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.570365  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.573562  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.622238  342599 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:43:51.622290  342599 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.622339  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.623682  342599 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:43:51.623772  342599 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.623841  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.757921  342599 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:43:51.757942  342599 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:43:51.757968  342599 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.757968  342599 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.758009  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758009  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758101  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.758165  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:51.758219  342599 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:43:51.758251  342599 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.758269  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.758285  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:43:51.758367  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:51.819059  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:51.819128  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:51.819135  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:51.819062  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:43:51.819186  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:51.819225  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:51.819239  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:52.005990  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:52.007566  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:43:52.012996  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:52.013008  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:43:52.013082  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:43:52.013133  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:43:52.013213  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:52.113680  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:43:52.201435  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:43:52.201538  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:43:52.206771  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:43:52.208106  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:43:52.208187  342599 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:43:52.225734  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:43:52.299412  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:43:52.299468  342599 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:43:52.378199  342599 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:52.517048  342599 cache_images.go:92] duration metric: took 1.232574481s to LoadCachedImages
	W0916 11:43:52.517148  342599 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
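	Each "needs transfer" line above comes from the same probe: inspect the image in the runtime and compare its ID to the expected hash; on a miss the image is rmi'd and reloaded from the local cache, which in this run lacks coredns_1.7.0, hence the warning. A sketch of that probe using the same podman invocation as the log; hard-coding the expected hash is illustrative only (minikube derives it from its image manifest).
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// imageNeedsTransfer reports whether the runtime is missing the image,
	// or holds it under a different ID than expected.
	func imageNeedsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}
	
	func main() {
		// Hash taken from the coredns line in the log above.
		if imageNeedsTransfer("registry.k8s.io/coredns:1.7.0",
			"bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16") {
			fmt.Println("registry.k8s.io/coredns:1.7.0 needs transfer")
		}
	}
	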
	I0916 11:43:52.517167  342599 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 crio true true} ...
	I0916 11:43:52.517302  342599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-406673 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:43:52.517418  342599 ssh_runner.go:195] Run: crio config
	I0916 11:43:52.561512  342599 cni.go:84] Creating CNI manager for ""
	I0916 11:43:52.561534  342599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:43:52.561543  342599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:43:52.561561  342599 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-406673 NodeName:old-k8s-version-406673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:43:52.561689  342599 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-406673"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:43:52.561758  342599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:43:52.570704  342599 binaries.go:44] Found k8s binaries, skipping transfer
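	The kubeadm.yaml generated above is a multi-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check the kubelet document before it ships to the node, assuming gopkg.in/yaml.v3; the struct here is a minimal illustrative subset of the real KubeletConfiguration type, not the upstream definition.
	
	package main
	
	import (
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	type doc struct {
		Kind         string `yaml:"kind"`
		CgroupDriver string `yaml:"cgroupDriver"`
		FailSwapOn   *bool  `yaml:"failSwapOn"`
	}
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var d doc
			if err := dec.Decode(&d); err != nil {
				break // io.EOF after the last document
			}
			if d.Kind == "KubeletConfiguration" && d.FailSwapOn != nil {
				fmt.Printf("cgroupDriver=%s failSwapOn=%v\n", d.CgroupDriver, *d.FailSwapOn)
			}
		}
	}
	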
	I0916 11:43:52.570772  342599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:43:52.579313  342599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (481 bytes)
	I0916 11:43:52.596268  342599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:43:52.612866  342599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0916 11:43:52.629581  342599 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:43:52.632853  342599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:43:52.643379  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:52.720660  342599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:43:52.734195  342599 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673 for IP: 192.168.103.2
	I0916 11:43:52.734216  342599 certs.go:194] generating shared ca certs ...
	I0916 11:43:52.734231  342599 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:52.734355  342599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:43:52.734391  342599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:43:52.734402  342599 certs.go:256] generating profile certs ...
	I0916 11:43:52.734473  342599 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.key
	I0916 11:43:52.734530  342599 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key.13b4f1db
	I0916 11:43:52.734564  342599 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key
	I0916 11:43:52.734710  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:43:52.734744  342599 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:43:52.734754  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:43:52.734773  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:43:52.734795  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:43:52.734814  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:43:52.734850  342599 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:43:52.735413  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:43:52.758887  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:43:52.782936  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:43:52.810335  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:43:52.835181  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:43:52.858252  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:43:52.880337  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:43:52.903907  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:43:52.927676  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:43:52.950944  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:43:52.974697  342599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:43:52.997934  342599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:43:53.016161  342599 ssh_runner.go:195] Run: openssl version
	I0916 11:43:53.021716  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:43:53.032092  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.035726  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.035794  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:43:53.042425  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:43:53.050857  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:43:53.059886  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.063252  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.063300  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:43:53.069514  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:43:53.078142  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:43:53.087290  342599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.090824  342599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.090896  342599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:43:53.097688  342599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
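	The test -L || ln -fs pattern above creates the OpenSSL subject-hash links (3ec20f2e.0, b5213941.0, 51391683.0) that let TLS libraries locate a CA under /etc/ssl/certs by hash. A sketch that derives the hash by invoking openssl x509 -hash and then makes the link; the helper name is made up, the commands are the ones in the log.
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func linkBySubjectHash(pem string) error {
		// openssl x509 -hash prints the subject hash used for the "<hash>.0"
		// lookup names under /etc/ssl/certs, as in the log above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // mimic ln -f: replace any stale link
		return os.Symlink(pem, link)
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	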
	I0916 11:43:53.106525  342599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:43:53.109881  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:43:53.116612  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:43:53.123543  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:43:53.130272  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:43:53.136649  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:43:53.143689  342599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
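	Each -checkend 86400 run above asks one question: does this cert survive the next 24 hours? Any failure would force certificate regeneration before the cluster restart. The same check in pure Go with crypto/x509, probing one of the paths from the log; this is an equivalent re-implementation, not minikube's code.
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		// Equivalent of: openssl x509 -noout -in <path> -checkend 86400
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
	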
	I0916 11:43:53.151260  342599 kubeadm.go:392] StartCluster: {Name:old-k8s-version-406673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-406673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:43:53.151380  342599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:43:53.151472  342599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:43:53.185768  342599 cri.go:89] found id: ""
	I0916 11:43:53.185846  342599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:43:53.194666  342599 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:43:53.194693  342599 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:43:53.194743  342599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:43:53.203055  342599 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:43:53.203881  342599 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-406673" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:53.204510  342599 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-406673" cluster setting kubeconfig missing "old-k8s-version-406673" context setting]
	I0916 11:43:53.205412  342599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:53.206930  342599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:43:53.215880  342599 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0916 11:43:53.215923  342599 kubeadm.go:597] duration metric: took 21.223045ms to restartPrimaryControlPlane
	I0916 11:43:53.215932  342599 kubeadm.go:394] duration metric: took 64.683125ms to StartCluster
	I0916 11:43:53.215949  342599 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:53.216018  342599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:43:53.218206  342599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:43:53.218661  342599 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:43:53.219512  342599 config.go:182] Loaded profile config "old-k8s-version-406673": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:43:53.219410  342599 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:43:53.219686  342599 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-406673"
	I0916 11:43:53.219705  342599 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-406673"
	W0916 11:43:53.219717  342599 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:43:53.219747  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.219785  342599 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-406673"
	I0916 11:43:53.219883  342599 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-406673"
	I0916 11:43:53.219823  342599 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-406673"
	I0916 11:43:53.220280  342599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-406673"
	I0916 11:43:53.219834  342599 addons.go:69] Setting dashboard=true in profile "old-k8s-version-406673"
	I0916 11:43:53.220375  342599 addons.go:234] Setting addon dashboard=true in "old-k8s-version-406673"
	W0916 11:43:53.220386  342599 addons.go:243] addon dashboard should already be in state true
	I0916 11:43:53.220422  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	W0916 11:43:53.220260  342599 addons.go:243] addon metrics-server should already be in state true
	I0916 11:43:53.220488  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.220653  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220710  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220869  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.220926  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.221032  342599 out.go:177] * Verifying Kubernetes components...
	I0916 11:43:53.222752  342599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:43:53.244346  342599 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-406673"
	W0916 11:43:53.244373  342599 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:43:53.244398  342599 host.go:66] Checking if "old-k8s-version-406673" exists ...
	I0916 11:43:53.244751  342599 cli_runner.go:164] Run: docker container inspect old-k8s-version-406673 --format={{.State.Status}}
	I0916 11:43:53.245037  342599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:43:53.246474  342599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:43:53.246481  342599 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:43:53.248096  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:43:53.248127  342599 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:43:53.248185  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.248192  342599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.248201  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:43:53.248098  342599 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:43:53.248252  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.250338  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:43:53.250359  342599 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:43:53.250404  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.273873  342599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:53.273898  342599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:43:53.273955  342599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-406673
	I0916 11:43:53.274169  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.275302  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.280036  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.301411  342599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/old-k8s-version-406673/id_rsa Username:docker}
	I0916 11:43:53.328656  342599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:43:53.340523  342599 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:43:53.387478  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:43:53.387506  342599 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:43:53.387745  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.396664  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:43:53.396691  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:43:53.406440  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:43:53.406463  342599 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:43:53.407903  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:53.416422  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:43:53.416449  342599 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:43:53.427712  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:43:53.427740  342599 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:43:53.439315  342599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:53.439342  342599 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:43:53.503707  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:43:53.503732  342599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 11:43:53.510579  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:53.525664  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:43:53.525696  342599 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0916 11:43:53.525914  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.525944  342599 retry.go:31] will retry after 152.87848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:53.532836  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.532872  342599 retry.go:31] will retry after 157.07542ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.601969  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:43:53.601994  342599 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:43:53.621346  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:43:53.621373  342599 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0916 11:43:53.634937  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.634974  342599 retry.go:31] will retry after 321.390454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.639540  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:43:53.639567  342599 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:43:53.656867  342599 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:53.656893  342599 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:43:53.673744  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:53.679888  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:43:53.691095  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:53.745183  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.745217  342599 retry.go:31] will retry after 136.130565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:53.796348  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.796382  342599 retry.go:31] will retry after 443.518837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:53.810771  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.810811  342599 retry.go:31] will retry after 382.546252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.881722  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:53.941956  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.941994  342599 retry.go:31] will retry after 236.364167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:53.957151  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:43:54.015814  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.015853  342599 retry.go:31] will retry after 375.113173ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.179194  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:43:54.193519  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:43:54.240911  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:54.252866  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.252918  342599 retry.go:31] will retry after 401.151273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.296437  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.296479  342599 retry.go:31] will retry after 764.07049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.333432  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.333478  342599 retry.go:31] will retry after 477.82927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.392081  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:43:54.451932  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.451973  342599 retry.go:31] will retry after 337.169739ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.654238  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:54.712692  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.712728  342599 retry.go:31] will retry after 935.95517ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.789893  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:54.812303  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:54.852000  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.852037  342599 retry.go:31] will retry after 1.132792971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:54.874248  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:54.874282  342599 retry.go:31] will retry after 1.153231222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.061616  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:55.118580  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.118616  342599 retry.go:31] will retry after 952.42092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.341220  342599 node_ready.go:53] error getting node "old-k8s-version-406673": Get "https://192.168.103.2:8443/api/v1/nodes/old-k8s-version-406673": dial tcp 192.168.103.2:8443: connect: connection refused
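The node_ready.go line above is the other half of the same wait: the node object is fetched from https://192.168.103.2:8443 until its kubelet reports Ready, and while the apiserver is still restarting the dial is refused and the poll simply tries again. A hedged client-go sketch of that check — an assumed reconstruction of the loop, not minikube's code:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's NodeReady condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for {
    		n, err := cs.CoreV1().Nodes().Get(context.TODO(),
    			"old-k8s-version-406673", metav1.GetOptions{})
    		if err != nil {
    			fmt.Println("error getting node:", err) // e.g. connection refused, as above
    		} else if nodeReady(n) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }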
	I0916 11:43:55.649816  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:55.707503  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.707546  342599 retry.go:31] will retry after 1.525466419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:55.985469  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:56.027729  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:56.048118  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.048158  342599 retry.go:31] will retry after 1.537917974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.071232  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:56.087643  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.087676  342599 retry.go:31] will retry after 1.497738328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:56.130041  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:56.130083  342599 retry.go:31] will retry after 1.703517602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.233406  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:57.294430  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.294464  342599 retry.go:31] will retry after 1.40258396s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.342100  342599 node_ready.go:53] error getting node "old-k8s-version-406673": Get "https://192.168.103.2:8443/api/v1/nodes/old-k8s-version-406673": dial tcp 192.168.103.2:8443: connect: connection refused
	I0916 11:43:57.586456  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:43:57.586462  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:43:57.646094  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.646123  342599 retry.go:31] will retry after 1.833576806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:43:57.646162  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.646188  342599 retry.go:31] will retry after 2.656765994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.834560  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:43:57.892906  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:57.892939  342599 retry.go:31] will retry after 2.18125411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:58.698022  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:43:58.758259  342599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:58.758297  342599 retry.go:31] will retry after 1.653760659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:43:59.480055  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:44:00.074833  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:44:00.303327  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:44:00.413145  342599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:44:03.516026  342599 node_ready.go:49] node "old-k8s-version-406673" has status "Ready":"True"
	I0916 11:44:03.516063  342599 node_ready.go:38] duration metric: took 10.17550256s for node "old-k8s-version-406673" to be "Ready" ...
	I0916 11:44:03.516076  342599 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
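The pod_ready.go waits that follow check each pod matching the system-critical labels above for a Ready condition. A hedged sketch of one such check with client-go — the selector and namespace come from the log, the helper itself is an assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podsReady reports whether every kube-system pod matching selector
    // has its Ready condition set to True.
    func podsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
    		metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return len(pods.Items) > 0, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ok, err := podsReady(context.TODO(), cs, "k8s-app=kube-dns")
    	fmt.Println("ready:", ok, "err:", err)
    }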
	I0916 11:44:03.717989  342599 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.816595  342599 pod_ready.go:93] pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:03.816691  342599 pod_ready.go:82] duration metric: took 98.666189ms for pod "coredns-74ff55c5b-6xlgw" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.816719  342599 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.918233  342599 pod_ready.go:93] pod "etcd-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:03.918276  342599 pod_ready.go:82] duration metric: took 101.538159ms for pod "etcd-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:03.918295  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:04.620945  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.140838756s)
	I0916 11:44:04.621040  342599 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-406673"
	I0916 11:44:04.621047  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.317689547s)
	I0916 11:44:04.620999  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.546120779s)
	I0916 11:44:04.898187  342599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.484990296s)
	I0916 11:44:04.900305  342599 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-406673 addons enable metrics-server
	
	I0916 11:44:04.901863  342599 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0916 11:44:04.903406  342599 addons.go:510] duration metric: took 11.683989587s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
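The "duration metric: took …" lines are plain wall-clock measurements taken around each phase; a trivial sketch of the pattern using only the standard library (the function name is a stand-in for the apply/verify work logged above):

    package main

    import (
    	"fmt"
    	"time"
    )

    func enableAddons() { time.Sleep(100 * time.Millisecond) } // stand-in for the real work

    func main() {
    	start := time.Now()
    	enableAddons()
    	fmt.Printf("duration metric: took %s for enable addons\n", time.Since(start))
    }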
	I0916 11:44:05.923500  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:07.924626  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:09.926452  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:12.424233  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:14.924223  342599 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:15.423797  342599 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:44:15.423828  342599 pod_ready.go:82] duration metric: took 11.505525488s for pod "kube-apiserver-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:15.423838  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:44:17.429733  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:19.430224  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:21.929713  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:24.009627  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:26.430326  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:28.430726  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:30.930780  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:33.433263  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:35.929752  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:37.929837  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:39.930279  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:41.930540  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:43.930791  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:46.429510  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:48.430161  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:50.430295  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:52.929990  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:54.930580  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:57.429547  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:44:59.430191  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:01.930680  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:04.430050  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:06.431610  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:08.929699  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:11.430903  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:13.929747  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:15.931063  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:18.430474  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:20.929144  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:22.929833  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:24.930407  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:26.931153  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:29.430186  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:31.929901  342599 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:34.430487  342599 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.430512  342599 pod_ready.go:82] duration metric: took 1m19.006667807s for pod "kube-controller-manager-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.430523  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.435258  342599 pod_ready.go:93] pod "kube-proxy-pcbvp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.435281  342599 pod_ready.go:82] duration metric: took 4.751917ms for pod "kube-proxy-pcbvp" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.435290  342599 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.439468  342599 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace has status "Ready":"True"
	I0916 11:45:34.439490  342599 pod_ready.go:82] duration metric: took 4.192562ms for pod "kube-scheduler-old-k8s-version-406673" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:34.439505  342599 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace to be "Ready" ...
	I0916 11:45:36.445827  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:38.946013  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:41.444852  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:43.445737  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:45.946748  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:48.445118  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:50.445210  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:52.445816  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:54.446068  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:56.945501  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:45:58.945685  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:01.445377  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:03.445752  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:05.945806  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:08.446010  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:10.446073  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:12.945844  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:15.446131  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:17.946289  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:20.445864  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:22.445951  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:24.946488  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:27.445839  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:29.945436  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:31.945951  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:33.947646  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:36.445905  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:38.948094  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:41.446271  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:43.978003  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:46.445688  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:48.946292  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:51.445713  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:53.945072  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:55.945739  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:46:58.445191  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:00.445680  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:02.446254  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:04.946036  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:07.447667  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:09.945983  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:12.445228  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:14.445689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:16.445931  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:18.945281  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:20.945433  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:22.946291  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:25.444655  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:27.445696  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:29.445774  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:31.945999  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:34.444676  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:36.445444  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:38.945689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:41.446060  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:43.948656  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:46.445159  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:48.946051  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:51.446010  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:53.446145  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:55.945438  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:57.945706  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:47:59.946103  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:02.445233  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:04.945988  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:07.445200  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:09.446085  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:11.944825  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:13.945689  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:16.444784  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:18.444860  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:20.445186  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:22.447125  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:24.945528  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:26.945691  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:29.446345  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:31.945589  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:34.444967  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:36.445485  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:38.945937  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:41.445492  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:43.445794  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:45.945563  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:48.445313  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:50.946012  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:53.445570  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:55.947554  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:48:58.445126  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:00.945813  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:02.946300  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:05.445265  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:07.446242  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:09.946173  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:12.446147  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:14.945283  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:17.447088  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:19.945240  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:21.945474  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:24.445814  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:26.945457  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:29.445643  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:31.945681  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:34.445158  342599 pod_ready.go:103] pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace has status "Ready":"False"
	I0916 11:49:34.445185  342599 pod_ready.go:82] duration metric: took 4m0.005672608s for pod "metrics-server-9975d5f86-zkwwm" in "kube-system" namespace to be "Ready" ...
	E0916 11:49:34.445196  342599 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:49:34.445205  342599 pod_ready.go:39] duration metric: took 5m30.929118215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
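The four-minute loop above ends with WaitExtra hitting its context deadline: metrics-server-9975d5f86-zkwwm never reports Ready. As a minimal sketch of reproducing the same check by hand, assuming the kubeconfig context matches the profile name (old-k8s-version-406673) and reusing the pod name from the log:

    # Poll the same Ready condition minikube is waiting on (sketch; context name is assumed).
    kubectl --context old-k8s-version-406673 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-9975d5f86-zkwwm --timeout=4m0s

kubectl wait exits non-zero on timeout, which corresponds to the "context deadline exceeded" error logged above.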
	I0916 11:49:34.445222  342599 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:49:34.445252  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:49:34.445299  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:49:34.479712  342599 cri.go:89] found id: "f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:34.479738  342599 cri.go:89] found id: ""
	I0916 11:49:34.479748  342599 logs.go:276] 1 containers: [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d]
	I0916 11:49:34.479800  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.483247  342599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:49:34.483318  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:49:34.517155  342599 cri.go:89] found id: "7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:34.517180  342599 cri.go:89] found id: ""
	I0916 11:49:34.517188  342599 logs.go:276] 1 containers: [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6]
	I0916 11:49:34.517247  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.520774  342599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:49:34.520856  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:49:34.554354  342599 cri.go:89] found id: "97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:34.554377  342599 cri.go:89] found id: ""
	I0916 11:49:34.554387  342599 logs.go:276] 1 containers: [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84]
	I0916 11:49:34.554452  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.557960  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:49:34.558017  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:49:34.594211  342599 cri.go:89] found id: "0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:34.594233  342599 cri.go:89] found id: ""
	I0916 11:49:34.594241  342599 logs.go:276] 1 containers: [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f]
	I0916 11:49:34.594291  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.597717  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:49:34.597782  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:49:34.631348  342599 cri.go:89] found id: "5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:34.631372  342599 cri.go:89] found id: ""
	I0916 11:49:34.631382  342599 logs.go:276] 1 containers: [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849]
	I0916 11:49:34.631438  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.634962  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:49:34.635076  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:49:34.668370  342599 cri.go:89] found id: "b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:34.668392  342599 cri.go:89] found id: ""
	I0916 11:49:34.668401  342599 logs.go:276] 1 containers: [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19]
	I0916 11:49:34.668456  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.671903  342599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:49:34.671964  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:49:34.707573  342599 cri.go:89] found id: "368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:34.707601  342599 cri.go:89] found id: ""
	I0916 11:49:34.707611  342599 logs.go:276] 1 containers: [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4]
	I0916 11:49:34.707658  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.711089  342599 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:49:34.711146  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:49:34.746008  342599 cri.go:89] found id: "97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:34.746034  342599 cri.go:89] found id: ""
	I0916 11:49:34.746041  342599 logs.go:276] 1 containers: [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf]
	I0916 11:49:34.746091  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:34.749832  342599 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:49:34.749936  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:49:34.782428  342599 cri.go:89] found id: "5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:34.782453  342599 cri.go:89] found id: ""
	I0916 11:49:34.782462  342599 logs.go:276] 1 containers: [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd]
	I0916 11:49:34.782512  342599 ssh_runner.go:195] Run: which crictl
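Each "listing CRI containers" step in this block resolves one container ID per control-plane component with the same crictl invocation, varying only the --name filter; --quiet prints bare IDs, which is what logs.go:276 parses. Run on the node itself, the discovery reduces to a sketch like this (entering the node via minikube ssh is an assumption; the component list mirrors the log):

    # Sketch of the container-ID discovery loop above
    # (run inside the node, e.g. minikube ssh -p old-k8s-version-406673).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
      printf '%s: ' "$name"
      sudo crictl ps -a --quiet --name="$name"    # one container ID per line, empty if no match
    done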
	I0916 11:49:34.786501  342599 logs.go:123] Gathering logs for dmesg ...
	I0916 11:49:34.786532  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:49:34.807221  342599 logs.go:123] Gathering logs for kube-proxy [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849] ...
	I0916 11:49:34.807251  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:34.843519  342599 logs.go:123] Gathering logs for kube-apiserver [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d] ...
	I0916 11:49:34.843550  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:34.904038  342599 logs.go:123] Gathering logs for kubernetes-dashboard [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf] ...
	I0916 11:49:34.904072  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:34.938520  342599 logs.go:123] Gathering logs for kindnet [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4] ...
	I0916 11:49:34.938549  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:34.976046  342599 logs.go:123] Gathering logs for storage-provisioner [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd] ...
	I0916 11:49:34.976077  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:35.011710  342599 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:49:35.011741  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:49:35.076295  342599 logs.go:123] Gathering logs for kubelet ...
	I0916 11:49:35.076330  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:49:35.115636  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.608970    1237 reflector.go:138] object-"kube-system"/"storage-provisioner-token-767ft": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-767ft" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.115819  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609271    1237 reflector.go:138] object-"kube-system"/"coredns-token-75kvx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-75kvx" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116040  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609320    1237 reflector.go:138] object-"kube-system"/"metrics-server-token-2vx2d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2vx2d" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116205  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609457    1237 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.116360  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609499    1237 reflector.go:138] object-"kube-system"/"kindnet-token-c5qt9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-c5qt9" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:35.123705  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:09 old-k8s-version-406673 kubelet[1237]: E0916 11:44:09.475464    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.123850  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:10 old-k8s-version-406673 kubelet[1237]: E0916 11:44:10.312296    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.126870  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:33 old-k8s-version-406673 kubelet[1237]: E0916 11:44:33.264025    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.127338  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:34 old-k8s-version-406673 kubelet[1237]: E0916 11:44:34.404862    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127612  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:35 old-k8s-version-406673 kubelet[1237]: E0916 11:44:35.407622    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127855  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:39 old-k8s-version-406673 kubelet[1237]: E0916 11:44:39.894316    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.127989  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:44 old-k8s-version-406673 kubelet[1237]: E0916 11:44:44.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.128412  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:55 old-k8s-version-406673 kubelet[1237]: E0916 11:44:55.437796    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.129943  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:58 old-k8s-version-406673 kubelet[1237]: E0916 11:44:58.310102    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.130185  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:59 old-k8s-version-406673 kubelet[1237]: E0916 11:44:59.894304    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.130422  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.205817    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.130556  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.206178    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.130984  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:24 old-k8s-version-406673 kubelet[1237]: E0916 11:45:24.482238    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.131116  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:25 old-k8s-version-406673 kubelet[1237]: E0916 11:45:25.206099    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.131360  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:29 old-k8s-version-406673 kubelet[1237]: E0916 11:45:29.894364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.131500  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:37 old-k8s-version-406673 kubelet[1237]: E0916 11:45:37.206069    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.131753  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:42 old-k8s-version-406673 kubelet[1237]: E0916 11:45:42.205686    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.133205  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:51 old-k8s-version-406673 kubelet[1237]: E0916 11:45:51.269661    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.133479  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:56 old-k8s-version-406673 kubelet[1237]: E0916 11:45:56.206262    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.133630  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:03 old-k8s-version-406673 kubelet[1237]: E0916 11:46:03.206044    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134062  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:11 old-k8s-version-406673 kubelet[1237]: E0916 11:46:11.550095    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134197  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:18 old-k8s-version-406673 kubelet[1237]: E0916 11:46:18.206493    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134434  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:19 old-k8s-version-406673 kubelet[1237]: E0916 11:46:19.894425    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134687  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.205670    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.134821  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.206071    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.134958  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:42 old-k8s-version-406673 kubelet[1237]: E0916 11:46:42.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.135210  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:44 old-k8s-version-406673 kubelet[1237]: E0916 11:46:44.205741    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.135344  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:56 old-k8s-version-406673 kubelet[1237]: E0916 11:46:56.206336    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.135580  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:58 old-k8s-version-406673 kubelet[1237]: E0916 11:46:58.207462    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.136101  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:11 old-k8s-version-406673 kubelet[1237]: E0916 11:47:11.206125    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.136340  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:12 old-k8s-version-406673 kubelet[1237]: E0916 11:47:12.205756    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.137850  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:22 old-k8s-version-406673 kubelet[1237]: E0916 11:47:22.276097    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:35.138089  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:24 old-k8s-version-406673 kubelet[1237]: E0916 11:47:24.205721    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.138317  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.206240    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.138647  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.670364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.138886  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:39 old-k8s-version-406673 kubelet[1237]: E0916 11:47:39.894246    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139020  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:46 old-k8s-version-406673 kubelet[1237]: E0916 11:47:46.206145    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139257  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:52 old-k8s-version-406673 kubelet[1237]: E0916 11:47:52.205673    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139390  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:57 old-k8s-version-406673 kubelet[1237]: E0916 11:47:57.206159    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139625  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:07 old-k8s-version-406673 kubelet[1237]: E0916 11:48:07.205557    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.139761  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:08 old-k8s-version-406673 kubelet[1237]: E0916 11:48:08.206452    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.139894  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:19 old-k8s-version-406673 kubelet[1237]: E0916 11:48:19.206101    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.140128  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:22 old-k8s-version-406673 kubelet[1237]: E0916 11:48:22.205857    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140267  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:30 old-k8s-version-406673 kubelet[1237]: E0916 11:48:30.206056    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.140523  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: E0916 11:48:33.205579    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140778  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: E0916 11:48:44.205863    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.140950  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:45 old-k8s-version-406673 kubelet[1237]: E0916 11:48:45.206382    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.141221  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: E0916 11:48:56.205608    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.141578  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.141892  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.142029  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.142265  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.142397  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
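Every kubelet problem flagged above is one of two repeating failures: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 (the registry host never resolves, so ErrImagePull settles into ImagePullBackOff), and dashboard-metrics-scraper sits in a CrashLoopBackOff whose back-off grows from 10s to 2m40s. A hedged way to confirm either failure from outside the node is to read the pod events, which carry the same messages kubelet is logging here:

    # Sketch: surface the ImagePullBackOff / CrashLoopBackOff events kubectl-side
    # (pod names taken from the log above).
    kubectl --context old-k8s-version-406673 -n kube-system \
      describe pod metrics-server-9975d5f86-zkwwm | sed -n '/^Events:/,$p'
    kubectl --context old-k8s-version-406673 -n kubernetes-dashboard \
      describe pod dashboard-metrics-scraper-8d5bb5db8-dxnqs | sed -n '/^Events:/,$p'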
	I0916 11:49:35.142408  342599 logs.go:123] Gathering logs for etcd [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6] ...
	I0916 11:49:35.142422  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:35.182113  342599 logs.go:123] Gathering logs for kube-scheduler [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f] ...
	I0916 11:49:35.182144  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:35.223823  342599 logs.go:123] Gathering logs for container status ...
	I0916 11:49:35.223856  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:49:35.262634  342599 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:49:35.262663  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:49:35.367246  342599 logs.go:123] Gathering logs for coredns [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84] ...
	I0916 11:49:35.367278  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:35.402793  342599 logs.go:123] Gathering logs for kube-controller-manager [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19] ...
	I0916 11:49:35.402829  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:35.462604  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:35.462635  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:49:35.462715  342599 out.go:270] X Problems detected in kubelet:
	W0916 11:49:35.462728  342599 out.go:270]   Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.462739  342599 out.go:270]   Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.462755  342599 out.go:270]   Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:35.462770  342599 out.go:270]   Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:35.462780  342599 out.go:270]   Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:35.462788  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:35.462799  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
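The unresolvable fake.domain registry is deliberate rather than an environment fault: minikube's addons enable command accepts --images/--registries overrides, and the metrics-server test flow uses that mechanism to force a permanent pull failure. The exact invocation for this run is not in the log, so the following is an assumed reconstruction, not a transcript:

    # Assumed reconstruction: re-point the metrics-server image at a registry that cannot resolve.
    minikube -p old-k8s-version-406673 addons enable metrics-server \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain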
	I0916 11:49:45.464328  342599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:49:45.476151  342599 api_server.go:72] duration metric: took 5m52.257437357s to wait for apiserver process to appear ...
	I0916 11:49:45.476182  342599 api_server.go:88] waiting for apiserver healthz status ...
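With a kube-apiserver process confirmed by pgrep, the run moves on to polling the apiserver's healthz status. A minimal equivalent probe, assuming the authenticated path through kubectl (kube-apiserver, including v1.20.0, serves the raw /healthz endpoint):

    # Sketch: the same health probe via kubectl's raw API access; prints "ok" when healthy.
    kubectl --context old-k8s-version-406673 get --raw /healthz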
	I0916 11:49:45.476243  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:49:45.476303  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:49:45.512448  342599 cri.go:89] found id: "f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:45.512475  342599 cri.go:89] found id: ""
	I0916 11:49:45.512483  342599 logs.go:276] 1 containers: [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d]
	I0916 11:49:45.512531  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.516037  342599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:49:45.516112  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:49:45.549762  342599 cri.go:89] found id: "7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:45.549791  342599 cri.go:89] found id: ""
	I0916 11:49:45.549801  342599 logs.go:276] 1 containers: [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6]
	I0916 11:49:45.549848  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.553456  342599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:49:45.553520  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:49:45.587005  342599 cri.go:89] found id: "97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:45.587029  342599 cri.go:89] found id: ""
	I0916 11:49:45.587038  342599 logs.go:276] 1 containers: [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84]
	I0916 11:49:45.587095  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.590764  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:49:45.590840  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:49:45.623784  342599 cri.go:89] found id: "0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:45.623809  342599 cri.go:89] found id: ""
	I0916 11:49:45.623818  342599 logs.go:276] 1 containers: [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f]
	I0916 11:49:45.623891  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.627377  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:49:45.627428  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:49:45.660479  342599 cri.go:89] found id: "5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:45.660505  342599 cri.go:89] found id: ""
	I0916 11:49:45.660513  342599 logs.go:276] 1 containers: [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849]
	I0916 11:49:45.660575  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.664047  342599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:49:45.664102  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:49:45.699816  342599 cri.go:89] found id: "b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:45.699842  342599 cri.go:89] found id: ""
	I0916 11:49:45.699851  342599 logs.go:276] 1 containers: [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19]
	I0916 11:49:45.699906  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.703371  342599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:49:45.703425  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:49:45.736043  342599 cri.go:89] found id: "368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:45.736062  342599 cri.go:89] found id: ""
	I0916 11:49:45.736069  342599 logs.go:276] 1 containers: [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4]
	I0916 11:49:45.736110  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.739784  342599 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:49:45.739851  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:49:45.772325  342599 cri.go:89] found id: "5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:45.772352  342599 cri.go:89] found id: ""
	I0916 11:49:45.772362  342599 logs.go:276] 1 containers: [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd]
	I0916 11:49:45.772418  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.775808  342599 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:49:45.775861  342599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:49:45.809227  342599 cri.go:89] found id: "97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:45.809253  342599 cri.go:89] found id: ""
	I0916 11:49:45.809261  342599 logs.go:276] 1 containers: [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf]
	I0916 11:49:45.809321  342599 ssh_runner.go:195] Run: which crictl
	I0916 11:49:45.812839  342599 logs.go:123] Gathering logs for etcd [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6] ...
	I0916 11:49:45.812865  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6"
	I0916 11:49:45.851277  342599 logs.go:123] Gathering logs for kube-apiserver [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d] ...
	I0916 11:49:45.851308  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d"
	I0916 11:49:45.909692  342599 logs.go:123] Gathering logs for container status ...
	I0916 11:49:45.909724  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:49:45.950858  342599 logs.go:123] Gathering logs for kubelet ...
	I0916 11:49:45.950886  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:49:45.989747  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.608970    1237 reflector.go:138] object-"kube-system"/"storage-provisioner-token-767ft": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-767ft" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.989949  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609271    1237 reflector.go:138] object-"kube-system"/"coredns-token-75kvx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-75kvx" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990131  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609320    1237 reflector.go:138] object-"kube-system"/"metrics-server-token-2vx2d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2vx2d" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990299  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609457    1237 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.990482  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:03 old-k8s-version-406673 kubelet[1237]: E0916 11:44:03.609499    1237 reflector.go:138] object-"kube-system"/"kindnet-token-c5qt9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-c5qt9" is forbidden: User "system:node:old-k8s-version-406673" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-406673' and this object
	W0916 11:49:45.998668  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:09 old-k8s-version-406673 kubelet[1237]: E0916 11:44:09.475464    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:45.998853  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:10 old-k8s-version-406673 kubelet[1237]: E0916 11:44:10.312296    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.002222  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:33 old-k8s-version-406673 kubelet[1237]: E0916 11:44:33.264025    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.002809  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:34 old-k8s-version-406673 kubelet[1237]: E0916 11:44:34.404862    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003157  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:35 old-k8s-version-406673 kubelet[1237]: E0916 11:44:35.407622    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003473  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:39 old-k8s-version-406673 kubelet[1237]: E0916 11:44:39.894316    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.003636  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:44 old-k8s-version-406673 kubelet[1237]: E0916 11:44:44.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.004142  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:55 old-k8s-version-406673 kubelet[1237]: E0916 11:44:55.437796    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.005728  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:58 old-k8s-version-406673 kubelet[1237]: E0916 11:44:58.310102    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.005999  342599 logs.go:138] Found kubelet problem: Sep 16 11:44:59 old-k8s-version-406673 kubelet[1237]: E0916 11:44:59.894304    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.006254  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.205817    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.006403  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:13 old-k8s-version-406673 kubelet[1237]: E0916 11:45:13.206178    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.006850  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:24 old-k8s-version-406673 kubelet[1237]: E0916 11:45:24.482238    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.007003  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:25 old-k8s-version-406673 kubelet[1237]: E0916 11:45:25.206099    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.007264  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:29 old-k8s-version-406673 kubelet[1237]: E0916 11:45:29.894364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.007412  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:37 old-k8s-version-406673 kubelet[1237]: E0916 11:45:37.206069    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.007693  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:42 old-k8s-version-406673 kubelet[1237]: E0916 11:45:42.205686    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.009255  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:51 old-k8s-version-406673 kubelet[1237]: E0916 11:45:51.269661    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.009575  342599 logs.go:138] Found kubelet problem: Sep 16 11:45:56 old-k8s-version-406673 kubelet[1237]: E0916 11:45:56.206262    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.009750  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:03 old-k8s-version-406673 kubelet[1237]: E0916 11:46:03.206044    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.010204  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:11 old-k8s-version-406673 kubelet[1237]: E0916 11:46:11.550095    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.010352  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:18 old-k8s-version-406673 kubelet[1237]: E0916 11:46:18.206493    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.010606  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:19 old-k8s-version-406673 kubelet[1237]: E0916 11:46:19.894425    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.010860  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.205670    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.011011  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:30 old-k8s-version-406673 kubelet[1237]: E0916 11:46:30.206071    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011162  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:42 old-k8s-version-406673 kubelet[1237]: E0916 11:46:42.206193    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011420  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:44 old-k8s-version-406673 kubelet[1237]: E0916 11:46:44.205741    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.011600  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:56 old-k8s-version-406673 kubelet[1237]: E0916 11:46:56.206336    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.011878  342599 logs.go:138] Found kubelet problem: Sep 16 11:46:58 old-k8s-version-406673 kubelet[1237]: E0916 11:46:58.207462    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.012463  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:11 old-k8s-version-406673 kubelet[1237]: E0916 11:47:11.206125    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.012710  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:12 old-k8s-version-406673 kubelet[1237]: E0916 11:47:12.205756    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.014264  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:22 old-k8s-version-406673 kubelet[1237]: E0916 11:47:22.276097    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0916 11:49:46.014506  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:24 old-k8s-version-406673 kubelet[1237]: E0916 11:47:24.205721    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.014752  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.206240    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.015120  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:35 old-k8s-version-406673 kubelet[1237]: E0916 11:47:35.670364    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015359  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:39 old-k8s-version-406673 kubelet[1237]: E0916 11:47:39.894246    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015495  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:46 old-k8s-version-406673 kubelet[1237]: E0916 11:47:46.206145    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.015737  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:52 old-k8s-version-406673 kubelet[1237]: E0916 11:47:52.205673    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.015874  342599 logs.go:138] Found kubelet problem: Sep 16 11:47:57 old-k8s-version-406673 kubelet[1237]: E0916 11:47:57.206159    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016114  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:07 old-k8s-version-406673 kubelet[1237]: E0916 11:48:07.205557    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.016247  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:08 old-k8s-version-406673 kubelet[1237]: E0916 11:48:08.206452    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016379  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:19 old-k8s-version-406673 kubelet[1237]: E0916 11:48:19.206101    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016615  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:22 old-k8s-version-406673 kubelet[1237]: E0916 11:48:22.205857    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.016751  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:30 old-k8s-version-406673 kubelet[1237]: E0916 11:48:30.206056    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.016990  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: E0916 11:48:33.205579    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017226  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: E0916 11:48:44.205863    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017396  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:45 old-k8s-version-406673 kubelet[1237]: E0916 11:48:45.206382    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.017637  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: E0916 11:48:56.205608    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.017975  342599 logs.go:138] Found kubelet problem: Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.018314  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.018532  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.018808  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.018949  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.019188  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: E0916 11:49:35.205643    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.019329  342599 logs.go:138] Found kubelet problem: Sep 16 11:49:39 old-k8s-version-406673 kubelet[1237]: E0916 11:49:39.206202    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:46.019344  342599 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:49:46.019362  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:49:46.120596  342599 logs.go:123] Gathering logs for kube-proxy [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849] ...
	I0916 11:49:46.120625  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849"
	I0916 11:49:46.155238  342599 logs.go:123] Gathering logs for kindnet [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4] ...
	I0916 11:49:46.155276  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4"
	I0916 11:49:46.196278  342599 logs.go:123] Gathering logs for kubernetes-dashboard [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf] ...
	I0916 11:49:46.196315  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf"
	I0916 11:49:46.231618  342599 logs.go:123] Gathering logs for dmesg ...
	I0916 11:49:46.231644  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:49:46.251725  342599 logs.go:123] Gathering logs for coredns [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84] ...
	I0916 11:49:46.251757  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84"
	I0916 11:49:46.285166  342599 logs.go:123] Gathering logs for kube-scheduler [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f] ...
	I0916 11:49:46.285195  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f"
	I0916 11:49:46.322323  342599 logs.go:123] Gathering logs for kube-controller-manager [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19] ...
	I0916 11:49:46.322354  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19"
	I0916 11:49:46.383562  342599 logs.go:123] Gathering logs for storage-provisioner [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd] ...
	I0916 11:49:46.383598  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd"
	I0916 11:49:46.419476  342599 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:49:46.419504  342599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:49:46.486057  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:46.486091  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:49:46.486149  342599 out.go:270] X Problems detected in kubelet:
	W0916 11:49:46.486160  342599 out.go:270]   Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.486167  342599 out.go:270]   Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.486178  342599 out.go:270]   Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:49:46.486186  342599 out.go:270]   Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: E0916 11:49:35.205643    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	W0916 11:49:46.486192  342599 out.go:270]   Sep 16 11:49:39 old-k8s-version-406673 kubelet[1237]: E0916 11:49:39.206202    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0916 11:49:46.486197  342599 out.go:358] Setting ErrFile to fd 2...
	I0916 11:49:46.486202  342599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:49:56.486729  342599 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:49:56.492442  342599 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:49:56.494563  342599 out.go:201] 
	W0916 11:49:56.495936  342599 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0916 11:49:56.495972  342599 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0916 11:49:56.495996  342599 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0916 11:49:56.496004  342599 out.go:270] * 
	W0916 11:49:56.496790  342599 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 11:49:56.498603  342599 out.go:201] 
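
	Note: the run above ends with minikube's K8S_UNHEALTHY_CONTROL_PLANE error — the apiserver answered /healthz with 200, but the control plane never reported the requested v1.20.0 within the 6m0s wait. A minimal sketch of the recovery path the log itself suggests; the profile name, Kubernetes version, and container runtime are taken from this run, and any further flags would be assumptions:
	
	    # Wipes all minikube profiles and cached state, per the suggestion above.
	    minikube delete --all --purge
	    # Recreate the profile exercised by this test (flags assumed from this run).
	    minikube start -p old-k8s-version-406673 --kubernetes-version=v1.20.0 --container-runtime=crio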
	
	
	==> CRI-O <==
	Sep 16 11:47:46 old-k8s-version-406673 crio[658]: time="2024-09-16 11:47:46.205909613Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e46d1883-df3c-478c-9b52-43b1f4b66b53 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:47:57 old-k8s-version-406673 crio[658]: time="2024-09-16 11:47:57.205590762Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0ee50556-8e10-451a-9d07-2ae4c6c9996b name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:47:57 old-k8s-version-406673 crio[658]: time="2024-09-16 11:47:57.205872422Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0ee50556-8e10-451a-9d07-2ae4c6c9996b name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:08 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:08.205927770Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0fda9557-920b-416b-aed1-8828429a423a name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:08 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:08.206213666Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0fda9557-920b-416b-aed1-8828429a423a name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:19 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:19.205516445Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6b61ba83-a7fe-4777-9ceb-3deed76a75b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:19 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:19.205792802Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6b61ba83-a7fe-4777-9ceb-3deed76a75b2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:30 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:30.205546758Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=225a3e9f-894f-4bed-9dc9-0f06dd504068 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:30 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:30.205816141Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=225a3e9f-894f-4bed-9dc9-0f06dd504068 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:45 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:45.205689456Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4594f766-f5f3-4768-8edb-2b6cacb02166 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:45 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:45.205912921Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4594f766-f5f3-4768-8edb-2b6cacb02166 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:58 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:58.167754509Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=5ce19d5a-f1e4-4a03-9328-16c1753b7070 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:58 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:58.168027574Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5ce19d5a-f1e4-4a03-9328-16c1753b7070 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:59 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:59.205587660Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=46e5b126-4bc5-4a49-8299-5156fb3752e8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:48:59 old-k8s-version-406673 crio[658]: time="2024-09-16 11:48:59.205837574Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=46e5b126-4bc5-4a49-8299-5156fb3752e8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:13 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:13.205588860Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ba709834-5cdd-4321-8fb8-da4780efcc01 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:13 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:13.205818502Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ba709834-5cdd-4321-8fb8-da4780efcc01 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:27 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:27.205589711Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6eea8117-c121-4212-9d87-2ba516071584 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:27 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:27.205844824Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6eea8117-c121-4212-9d87-2ba516071584 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:39 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:39.205649013Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1f2d6c84-3fdc-4849-9820-4b277fde2583 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:39 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:39.205947544Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1f2d6c84-3fdc-4849-9820-4b277fde2583 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:50 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:50.205575795Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a21fdcfb-e6e2-419b-97b4-b89ff6910578 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:49:50 old-k8s-version-406673 crio[658]: time="2024-09-16 11:49:50.205891848Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a21fdcfb-e6e2-419b-97b4-b89ff6910578 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:50:02 old-k8s-version-406673 crio[658]: time="2024-09-16 11:50:02.205685193Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=fba6d4d9-19cf-46b9-9d9c-57db08ae7078 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 16 11:50:02 old-k8s-version-406673 crio[658]: time="2024-09-16 11:50:02.205984060Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=fba6d4d9-19cf-46b9-9d9c-57db08ae7078 name=/runtime.v1alpha2.ImageService/ImageStatus
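
	Note: the repeating "Checking image status" / "Image ... not found" pairs above are the kubelet's back-off loop re-probing the deliberately unresolvable metrics-server image (fake.domain/registry.k8s.io/echoserver:1.4). A hedged way to reproduce one probe by hand, assuming SSH access to the node; crictl inspecti issues the same ImageStatus call that produces these lines:
	
	    # Hypothetical manual probe; expected to fail with an image-not-found error.
	    minikube ssh -p old-k8s-version-406673 -- sudo crictl inspecti fake.domain/registry.k8s.io/echoserver:1.4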
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	16a2fe5b8b22e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   00002de23c0ba       dashboard-metrics-scraper-8d5bb5db8-dxnqs
	97a484780a356       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   12e93458aef4e       kubernetes-dashboard-cd95d586-h95rv
	368f056913391       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b         6 minutes ago       Running             kindnet-cni                 0                   a2e083bcb0a1a       kindnet-mjcgf
	97fdc1e66b0e4       bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16                                           6 minutes ago       Running             coredns                     0                   7039bf8d6d58b       coredns-74ff55c5b-6xlgw
	5847ee074474b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           6 minutes ago       Running             storage-provisioner         0                   c68dec692823c       storage-provisioner
	5685724a36b6b       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc                                           6 minutes ago       Running             kube-proxy                  0                   9e4b127922197       kube-proxy-pcbvp
	b80ee304bde37       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080                                           6 minutes ago       Running             kube-controller-manager     0                   35ecfba1db612       kube-controller-manager-old-k8s-version-406673
	f6539ef58f9e0       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99                                           6 minutes ago       Running             kube-apiserver              0                   b270c3a332bfb       kube-apiserver-old-k8s-version-406673
	0516988d4d0e8       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899                                           6 minutes ago       Running             kube-scheduler              0                   e951ebd232405       kube-scheduler-old-k8s-version-406673
	7017b3108f0be       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                           6 minutes ago       Running             etcd                        0                   cea61840367ab       etcd-old-k8s-version-406673
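
	Note: this table matches the kubelet warnings gathered earlier — dashboard-metrics-scraper has Exited after 5 attempts (its CrashLoopBackOff back-off reached 2m40s), while every control-plane container is still on attempt 0. A sketch of how one might inspect the crashing pod directly, assuming the kubeconfig context minikube creates for this profile:
	
	    # Hypothetical follow-up; the context name mirrors the profile name.
	    kubectl --context old-k8s-version-406673 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-dxnqs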
	
	
	==> coredns [97fdc1e66b0e4090e756dc1d52c8fc143f1fed44431e3bc8e4525d9670f0ac84] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:38442 - 48402 "HINFO IN 8440324266966115617.7448481208015864567. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011622953s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:52772 - 30493 "HINFO IN 761927415616289072.1641658468185983910. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011782307s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-406673
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-406673
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-406673
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_41_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:41:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-406673
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:50:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:50:04 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:50:04 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:50:04 +0000   Mon, 16 Sep 2024 11:41:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:50:04 +0000   Mon, 16 Sep 2024 11:42:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-406673
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8ec375c2bd64b10897869c5d9453e9b
	  System UUID:                2d5bda39-09b0-43d0-95f9-1ff418499524
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-6xlgw                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m10s
	  kube-system                 etcd-old-k8s-version-406673                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m21s
	  kube-system                 kindnet-mjcgf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m10s
	  kube-system                 kube-apiserver-old-k8s-version-406673             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-controller-manager-old-k8s-version-406673    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-proxy-pcbvp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-old-k8s-version-406673             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 metrics-server-9975d5f86-zkwwm                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         6m36s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-dxnqs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-h95rv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m22s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m22s                  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m22s                  kubelet     Node old-k8s-version-406673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m22s                  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m9s                   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m42s                  kubelet     Node old-k8s-version-406673 status is now: NodeReady
	  Normal  Starting                 6m12s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m12s (x8 over 6m12s)  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m12s (x8 over 6m12s)  kubelet     Node old-k8s-version-406673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m12s (x8 over 6m12s)  kubelet     Node old-k8s-version-406673 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m6s                   kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 7b 93 72 59 99 08 06
	[Sep16 11:38] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e c8 59 6d ba 48 08 06
	[Sep16 11:39] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 0e 56 ba 2b 08 08 06
	[  +0.072831] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 e4 c5 5d 5b cd 08 06
	
	
	==> etcd [7017b3108f0be5e7063ba3cab72d6a3d12e0c9097a48b857b70ed9e8a810a7e6] <==
	2024-09-16 11:46:10.136460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:20.136557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:30.136291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:40.136468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:46:50.136416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:00.136483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:10.136387 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:20.136454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:30.136397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:40.136445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:47:50.136438 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:00.136281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:10.136457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:20.136463 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:30.136398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:40.136622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:48:50.136499 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:00.136473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:10.136465 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:20.136455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:30.136556 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:40.136366 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:49:50.136454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:50:00.136435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:50:10.136399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:50:10 up  1:32,  0 users,  load average: 0.82, 0.67, 0.77
	Linux old-k8s-version-406673 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [368f05691339162be8a50c077f04d5a83e08d134b16554c872c758c4c8bfa5c4] <==
	I0916 11:48:09.894648       1 main.go:299] handling current node
	I0916 11:48:19.896699       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:19.896770       1 main.go:299] handling current node
	I0916 11:48:29.903313       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:29.903350       1 main.go:299] handling current node
	I0916 11:48:39.902037       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:39.902089       1 main.go:299] handling current node
	I0916 11:48:49.902224       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:49.902259       1 main.go:299] handling current node
	I0916 11:48:59.901463       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:48:59.901497       1 main.go:299] handling current node
	I0916 11:49:09.894629       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:09.894667       1 main.go:299] handling current node
	I0916 11:49:19.901410       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:19.901446       1 main.go:299] handling current node
	I0916 11:49:29.902764       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:29.902840       1 main.go:299] handling current node
	I0916 11:49:39.897630       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:39.897675       1 main.go:299] handling current node
	I0916 11:49:49.902771       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:49.902809       1 main.go:299] handling current node
	I0916 11:49:59.901445       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:49:59.901483       1 main.go:299] handling current node
	I0916 11:50:09.894527       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:50:09.894572       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f6539ef58f9e0098d3f8e56b2e6190c1ba683a0890675780fd17e1034798380d] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:47:06.454753       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:47:19.048852       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:47:19.048892       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:47:19.048899       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:47:51.990441       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:47:51.990488       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:47:51.990496       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:48:26.148825       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:48:26.148872       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:48:26.148880       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 11:49:04.618784       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 11:49:04.618875       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:49:04.618884       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:49:07.795909       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:49:07.795951       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:49:07.795960       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:49:39.782779       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:49:39.782826       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:49:39.782835       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 11:50:04.619128       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 11:50:04.619179       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:50:04.619186       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b80ee304bde37cb433b5180e52fe2983209d03e43264601570221c3b3b6d2e19] <==
	E0916 11:45:54.100705       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:46:01.855531       1 request.go:655] Throttling request took 1.048706818s, request: GET:https://192.168.103.2:8443/apis/apiregistration.k8s.io/v1beta1?timeout=32s
	W0916 11:46:02.706861       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:46:24.600875       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:46:34.357365       1 request.go:655] Throttling request took 1.048487743s, request: GET:https://192.168.103.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0916 11:46:35.208330       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:46:55.102283       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:47:06.858592       1 request.go:655] Throttling request took 1.048648862s, request: GET:https://192.168.103.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0916 11:47:07.709483       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:47:25.603649       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:47:39.359703       1 request.go:655] Throttling request took 1.0487182s, request: GET:https://192.168.103.2:8443/apis/extensions/v1beta1?timeout=32s
	W0916 11:47:40.210574       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:47:56.105958       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:48:11.860709       1 request.go:655] Throttling request took 1.048808782s, request: GET:https://192.168.103.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0916 11:48:12.711817       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:48:26.607476       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:48:44.363089       1 request.go:655] Throttling request took 1.04855694s, request: GET:https://192.168.103.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0916 11:48:45.214378       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:48:57.109403       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:49:16.864750       1 request.go:655] Throttling request took 1.048355537s, request: GET:https://192.168.103.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0916 11:49:17.715986       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:49:27.611052       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:49:49.366266       1 request.go:655] Throttling request took 1.048549685s, request: GET:https://192.168.103.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0916 11:49:50.217118       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:49:58.112781       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [5685724a36b6b1d9ba29c96663018c8fd0f8fb9929e13a8d3e82cf8cdd52f849] <==
	I0916 11:42:00.995500       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:42:00.995590       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:42:01.010731       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:42:01.010826       1 server_others.go:185] Using iptables Proxier.
	I0916 11:42:01.012001       1 server.go:650] Version: v1.20.0
	I0916 11:42:01.013499       1 config.go:315] Starting service config controller
	I0916 11:42:01.013577       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:42:01.013592       1 config.go:224] Starting endpoint slice config controller
	I0916 11:42:01.013614       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:42:01.113797       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:42:01.113806       1 shared_informer.go:247] Caches are synced for service config 
	I0916 11:44:04.717629       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:44:04.717947       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:44:04.731733       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:44:04.731957       1 server_others.go:185] Using iptables Proxier.
	I0916 11:44:04.732297       1 server.go:650] Version: v1.20.0
	I0916 11:44:04.732738       1 config.go:315] Starting service config controller
	I0916 11:44:04.732748       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:44:04.795276       1 config.go:224] Starting endpoint slice config controller
	I0916 11:44:04.795305       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:44:04.833853       1 shared_informer.go:247] Caches are synced for service config 
	I0916 11:44:04.895480       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [0516988d4d0e8e8b0242318ba092db8dcee5d2aafef2700d9f1f184a61024d6f] <==
	E0916 11:41:40.593689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:40.593833       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:41:40.594045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:41:40.594338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:41:40.594501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:40.594699       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:41:40.594858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:41:40.595116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:41:40.595261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:41:40.595399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:41:41.428933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:41:41.508045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.594591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:41:41.695406       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0916 11:41:44.916550       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0916 11:43:59.643776       1 serving.go:331] Generated self-signed cert in-memory
	W0916 11:44:03.607340       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:44:03.607469       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:44:03.607489       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:44:03.607496       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:44:03.800839       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:44:03.801676       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:44:03.803024       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:44:03.801704       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0916 11:44:03.903482       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:48:30 old-k8s-version-406673 kubelet[1237]: E0916 11:48:30.206056    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: I0916 11:48:33.205171    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:48:33 old-k8s-version-406673 kubelet[1237]: E0916 11:48:33.205579    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: I0916 11:48:44.205439    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:48:44 old-k8s-version-406673 kubelet[1237]: E0916 11:48:44.205863    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:48:45 old-k8s-version-406673 kubelet[1237]: E0916 11:48:45.206382    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: I0916 11:48:56.205223    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:48:56 old-k8s-version-406673 kubelet[1237]: E0916 11:48:56.205608    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:48:58 old-k8s-version-406673 kubelet[1237]: E0916 11:48:58.202657    1237 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b, memory: /docker/28d6c5fc26a9bf075525bddeaf7ee6e3b693d05200b798e78f62e8f736f0aa2b/system.slice/kubelet.service
	Sep 16 11:48:59 old-k8s-version-406673 kubelet[1237]: E0916 11:48:59.206076    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: I0916 11:49:10.205202    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:49:10 old-k8s-version-406673 kubelet[1237]: E0916 11:49:10.205596    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:49:13 old-k8s-version-406673 kubelet[1237]: E0916 11:49:13.206081    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: I0916 11:49:21.205094    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:49:21 old-k8s-version-406673 kubelet[1237]: E0916 11:49:21.205412    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:49:27 old-k8s-version-406673 kubelet[1237]: E0916 11:49:27.206162    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: I0916 11:49:35.205223    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:49:35 old-k8s-version-406673 kubelet[1237]: E0916 11:49:35.205643    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:49:39 old-k8s-version-406673 kubelet[1237]: E0916 11:49:39.206202    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:49:50 old-k8s-version-406673 kubelet[1237]: I0916 11:49:50.205407    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:49:50 old-k8s-version-406673 kubelet[1237]: E0916 11:49:50.205809    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	Sep 16 11:49:50 old-k8s-version-406673 kubelet[1237]: E0916 11:49:50.206134    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:50:02 old-k8s-version-406673 kubelet[1237]: E0916 11:50:02.206205    1237 pod_workers.go:191] Error syncing pod cc94e6fe-629d-4146-adc6-f32166bf5081 ("metrics-server-9975d5f86-zkwwm_kube-system(cc94e6fe-629d-4146-adc6-f32166bf5081)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:50:04 old-k8s-version-406673 kubelet[1237]: I0916 11:50:04.205183    1237 scope.go:95] [topologymanager] RemoveContainer - Container ID: 16a2fe5b8b22e6ceccb8f5a94f9a88d14c998a92233d8311b183f4a57fb1a891
	Sep 16 11:50:04 old-k8s-version-406673 kubelet[1237]: E0916 11:50:04.205655    1237 pod_workers.go:191] Error syncing pod 1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5 ("dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dxnqs_kubernetes-dashboard(1e7e06de-77e1-47a5-8ecf-e2c06b3d28c5)"
	
	
	==> kubernetes-dashboard [97a484780a3568dabac8102b40f6b80961e35853964ca0e228b5af78980e7fdf] <==
	2024/09/16 11:44:28 Starting overwatch
	2024/09/16 11:44:28 Using namespace: kubernetes-dashboard
	2024/09/16 11:44:28 Using in-cluster config to connect to apiserver
	2024/09/16 11:44:28 Using secret token for csrf signing
	2024/09/16 11:44:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:44:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:44:28 Successful initial request to the apiserver, version: v1.20.0
	2024/09/16 11:44:28 Generating JWE encryption key
	2024/09/16 11:44:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:44:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:44:29 Initializing JWE encryption key from synchronized object
	2024/09/16 11:44:29 Creating in-cluster Sidecar client
	2024/09/16 11:44:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:44:29 Serving insecurely on HTTP port: 9090
	2024/09/16 11:44:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:45:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:45:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:46:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:46:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:47:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:47:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:48:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:48:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:49:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:49:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5847ee074474b71cb40f1118ff4d309073a9c78fb9dcae92e173265a2c889ccd] <==
	I0916 11:42:33.942881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:42:33.952289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:42:33.952327       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:42:33.995195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:42:33.995263       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88c65391-c353-4f97-bac8-9bd49b9f0588", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77 became leader
	I0916 11:42:33.995326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	I0916 11:42:34.095721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_12c2e137-b462-4df7-95ef-c21c07c91d77!
	I0916 11:44:05.490838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:44:05.500843       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:44:05.500889       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:44:22.921932       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:44:22.922027       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88c65391-c353-4f97-bac8-9bd49b9f0588", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-406673_3ba9e9fb-376f-4c9d-ac7a-117467cbcd44 became leader
	I0916 11:44:22.922079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_3ba9e9fb-376f-4c9d-ac7a-117467cbcd44!
	I0916 11:44:23.022817       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-406673_3ba9e9fb-376f-4c9d-ac7a-117467cbcd44!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-406673 -n old-k8s-version-406673
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (507.324µs)
helpers_test.go:263: kubectl --context old-k8s-version-406673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.96s)
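
Note on the recurring failure mode: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary at all, which almost always points to a binary built for the wrong architecture (or a truncated download) on the runner rather than a cluster problem. A minimal shell sketch for checking this, assuming a linux/amd64 runner as the report suggests (out/minikube-linux-amd64); the path comes from the failure messages above, and the v1.20.0 download URL is illustrative only:

	# Report the binary's format; a linux/amd64 runner should see "ELF 64-bit LSB executable, x86-64".
	file /usr/local/bin/kubectl
	# Report the host architecture for comparison.
	uname -m
	# If the two disagree, re-fetch kubectl for the matching architecture
	# (version shown is the one under test here, not necessarily what the runner needs):
	curl -LO "https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl"
	install -m 0755 kubectl /usr/local/bin/kubectl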

TestStartStop/group/no-preload/serial/DeployApp (3.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-179932 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-179932 create -f testdata/busybox.yaml: fork/exec /usr/local/bin/kubectl: exec format error (698.693µs)
start_stop_delete_test.go:196: kubectl --context no-preload-179932 create -f testdata/busybox.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-179932
helpers_test.go:235: (dbg) docker inspect no-preload-179932:

-- stdout --
	[
	    {
	        "Id": "33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db",
	        "Created": "2024-09-16T11:50:18.324141753Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 354317,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:50:18.460923195Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/hostname",
	        "HostsPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/hosts",
	        "LogPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db-json.log",
	        "Name": "/no-preload-179932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-179932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-179932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-179932",
	                "Source": "/var/lib/docker/volumes/no-preload-179932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-179932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-179932",
	                "name.minikube.sigs.k8s.io": "no-preload-179932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7cd51b56ae0e7b9c36d315b4ce9fb777c38e910770cfb5f1f448c928dadda05",
	            "SandboxKey": "/var/run/docker/netns/a7cd51b56ae0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-179932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3318c5c795cbdaf6a4546ff9f05fc1f3534565776857632d9afa204a3c5ca91f",
	                    "EndpointID": "1762fc6325de440c55f237e57f8ef1680b848810c568c35778055aedb3d79112",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-179932",
	                        "33415cb7fa83"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-179932 -n no-preload-179932
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-179932 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-179932 logs -n 25: (1.097654395s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cri-dockerd --version                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                  |                              |         |         |                     |                     |
	|         | containerd --all --full                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat containerd                          |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467        | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-406673 image                           | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-946599 | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | disable-driver-mounts-946599                           |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:50:17
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:50:17.261646  353745 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:50:17.261961  353745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:50:17.261974  353745 out.go:358] Setting ErrFile to fd 2...
	I0916 11:50:17.261981  353745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:50:17.262273  353745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:50:17.263118  353745 out.go:352] Setting JSON to false
	I0916 11:50:17.264280  353745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5557,"bootTime":1726481860,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:50:17.264369  353745 start.go:139] virtualization: kvm guest
	I0916 11:50:17.267026  353745 out.go:177] * [no-preload-179932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:50:17.268879  353745 notify.go:220] Checking for updates...
	I0916 11:50:17.268946  353745 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:50:17.270731  353745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:50:17.272238  353745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:50:17.273551  353745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:50:17.275161  353745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:50:17.276866  353745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:50:17.279205  353745 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279359  353745 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279497  353745 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279614  353745 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:50:17.307569  353745 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:50:17.307662  353745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:50:17.364583  353745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:50:17.353613217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:50:17.364687  353745 docker.go:318] overlay module found
	I0916 11:50:17.367827  353745 out.go:177] * Using the docker driver based on user configuration
	I0916 11:50:17.369319  353745 start.go:297] selected driver: docker
	I0916 11:50:17.369364  353745 start.go:901] validating driver "docker" against <nil>
	I0916 11:50:17.369380  353745 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:50:17.370517  353745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:50:17.426383  353745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:50:17.415784753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:50:17.426604  353745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:50:17.426824  353745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:50:17.428784  353745 out.go:177] * Using Docker driver with root privileges
	I0916 11:50:17.430291  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:17.430351  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:17.430360  353745 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:50:17.430422  353745 start.go:340] cluster config:
	{Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:50:17.432336  353745 out.go:177] * Starting "no-preload-179932" primary control-plane node in "no-preload-179932" cluster
	I0916 11:50:17.434034  353745 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:50:17.435683  353745 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:50:17.436991  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:50:17.437122  353745 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:50:17.437157  353745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json ...
	I0916 11:50:17.437183  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json: {Name:mkc16156d5a07d416da64f9d96a3502b09dcbb6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:17.437384  353745 cache.go:107] acquiring lock: {Name:mk871ae736ce09ba2b4421598649b9ecfc9a98bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437387  353745 cache.go:107] acquiring lock: {Name:mk8b23bbceb92ce965299065ca3d25050387467b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437413  353745 cache.go:107] acquiring lock: {Name:mk0d227841b16d1443985320c46c5945df5de856 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437384  353745 cache.go:107] acquiring lock: {Name:mkc9fa4e48807b59cdf7eefb19d5245546dc831d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437456  353745 cache.go:107] acquiring lock: {Name:mkf3f21a53f01d1ee0608b28c94cf582dc8c355f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437403  353745 cache.go:107] acquiring lock: {Name:mk540470437675d9c95f2acaf015b6015148e24f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437530  353745 cache.go:107] acquiring lock: {Name:mkbb0d7522afd30851ddf834442136fb3567a26a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437558  353745 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:50:17.437616  353745 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:17.437629  353745 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:17.437676  353745 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:17.437698  353745 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:17.437787  353745 cache.go:107] acquiring lock: {Name:mkfcf90f9df5885fe87d6ff86cdb7f8f58dec344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437843  353745 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:50:17.437856  353745 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 477.041µs
	I0916 11:50:17.437874  353745 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:50:17.437894  353745 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:17.437975  353745 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:17.439129  353745 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:50:17.439139  353745 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:17.439178  353745 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:17.439228  353745 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:17.439303  353745 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:17.439442  353745 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:17.439509  353745 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
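
The cache.go entries above follow a simple per-image scheme: each image name gets its own lock, a tarball already present under .minikube/cache/images/amd64/ counts as a hit (storage-provisioner_v5 here, in under a millisecond), and everything else is pulled concurrently once the daemon lookups fail. A minimal Go sketch of that pattern, with a hypothetical cacheImage helper and paths, not minikube's actual code:

    // cache_sketch.go - per-image locking and hit/miss check, illustrative only.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
    )

    var locks sync.Map // image name -> *sync.Mutex, one lock per image

    func cacheImage(cacheDir, image string) error {
        m, _ := locks.LoadOrStore(image, &sync.Mutex{})
        mu := m.(*sync.Mutex)
        mu.Lock()
        defer mu.Unlock()

        // registry.k8s.io/pause:3.10 -> <cacheDir>/registry.k8s.io/pause_3.10
        tar := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
        if _, err := os.Stat(tar); err == nil {
            return nil // cache hit: tarball already saved, nothing to do
        }
        // cache miss: pull from the registry and save the tarball (elided here).
        return fmt.Errorf("pull-and-save for %s not implemented in this sketch", image)
    }

    func main() {
        _ = cacheImage("/tmp/cache/images/amd64", "registry.k8s.io/pause:3.10")
    }
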
	W0916 11:50:17.465435  353745 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:50:17.465457  353745 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:50:17.465523  353745 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:50:17.465535  353745 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:50:17.465539  353745 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:50:17.465546  353745 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:50:17.465551  353745 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:50:17.540421  353745 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:50:17.540482  353745 cache.go:194] Successfully downloaded all kic artifacts
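
For the kicbase image itself the order is: ask the local daemon, reject its copy on the wrong-architecture warning, then fall back to the cached tarball, which is what "successfully loaded and using ... from cached tarball" records. Roughly, under those assumptions (ensureBaseImage and the tarball path are illustrative):

    // kic_sketch.go - fall back to a cached tarball when the daemon copy is unusable.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func ensureBaseImage(tarball string, daemonArchOK bool) error {
        if daemonArchOK {
            return nil // the daemon's copy is usable as-is
        }
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("no cached tarball either: %w", err)
        }
        // `docker load -i` restores the image from the cached tar.
        if out, err := exec.Command("docker", "load", "-i", tarball).CombinedOutput(); err != nil {
            return fmt.Errorf("docker load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        _ = ensureBaseImage("/tmp/cache/kic/kicbase.tar", false)
    }
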
	I0916 11:50:17.540523  353745 start.go:360] acquireMachinesLock for no-preload-179932: {Name:mkd475c3f7aed9017143023aeb4fceb62fe6c60d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.540666  353745 start.go:364] duration metric: took 116.626µs to acquireMachinesLock for "no-preload-179932"
	I0916 11:50:17.540697  353745 start.go:93] Provisioning new machine with config: &{Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:50:17.540799  353745 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:50:17.543760  353745 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:50:17.544066  353745 start.go:159] libmachine.API.Create for "no-preload-179932" (driver="docker")
	I0916 11:50:17.544097  353745 client.go:168] LocalClient.Create starting
	I0916 11:50:17.544177  353745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:50:17.544211  353745 main.go:141] libmachine: Decoding PEM data...
	I0916 11:50:17.544230  353745 main.go:141] libmachine: Parsing certificate...
	I0916 11:50:17.544292  353745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:50:17.544320  353745 main.go:141] libmachine: Decoding PEM data...
	I0916 11:50:17.544336  353745 main.go:141] libmachine: Parsing certificate...
	I0916 11:50:17.544768  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:50:17.563971  353745 cli_runner.go:211] docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:50:17.564043  353745 network_create.go:284] running [docker network inspect no-preload-179932] to gather additional debugging logs...
	I0916 11:50:17.564060  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932
	W0916 11:50:17.581522  353745 cli_runner.go:211] docker network inspect no-preload-179932 returned with exit code 1
	I0916 11:50:17.581552  353745 network_create.go:287] error running [docker network inspect no-preload-179932]: docker network inspect no-preload-179932: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-179932 not found
	I0916 11:50:17.581569  353745 network_create.go:289] output of [docker network inspect no-preload-179932]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-179932 not found
	
	** /stderr **
	I0916 11:50:17.581662  353745 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:50:17.600809  353745 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:50:17.601729  353745 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:50:17.602523  353745 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:50:17.603150  353745 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:50:17.603787  353745 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:50:17.604419  353745 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:50:17.605797  353745 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039cfe0}
	I0916 11:50:17.605828  353745 network_create.go:124] attempt to create docker network no-preload-179932 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:50:17.605872  353745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-179932 no-preload-179932
	I0916 11:50:17.676431  353745 network_create.go:108] docker network no-preload-179932 192.168.103.0/24 created
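
network.go walks candidate 192.168.x.0/24 blocks in steps of 9 (49, 58, 67, 76, 85, 94, ...) and takes the first one no existing br-* interface claims, which is how 192.168.103.0/24 fell out here; .1 becomes the gateway and the node gets .2, as the next line notes. A rough sketch of that probe, assuming the start and step seen in the log (the taken set is hard-coded for illustration):

    // subnet_sketch.go - pick the first free 192.168.x.0/24, stepping by 9.
    package main

    import (
        "fmt"
        "net"
    )

    // Subnets already bound to bridge interfaces in this run's log.
    var taken = map[string]bool{
        "192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
        "192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
    }

    func freeSubnet() (*net.IPNet, error) {
        for third := 49; third < 256; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if taken[cidr] {
                continue // skip subnets that are already in use
            }
            _, subnet, err := net.ParseCIDR(cidr)
            return subnet, err
        }
        return nil, fmt.Errorf("no free /24 found")
    }

    func main() {
        s, err := freeSubnet()
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", s) // 192.168.103.0/24, matching the log
    }
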
	I0916 11:50:17.676472  353745 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-179932" container
	I0916 11:50:17.676527  353745 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:50:17.695151  353745 cli_runner.go:164] Run: docker volume create no-preload-179932 --label name.minikube.sigs.k8s.io=no-preload-179932 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:50:17.716208  353745 oci.go:103] Successfully created a docker volume no-preload-179932
	I0916 11:50:17.716280  353745 cli_runner.go:164] Run: docker run --rm --name no-preload-179932-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-179932 --entrypoint /usr/bin/test -v no-preload-179932:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:50:17.982139  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:50:18.004879  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:50:18.032231  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:50:18.062798  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:50:18.064953  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:50:18.071480  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:50:18.072209  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:50:18.157840  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:50:18.157871  353745 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 720.488492ms
	I0916 11:50:18.157891  353745 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:50:18.244108  353745 oci.go:107] Successfully prepared a docker volume no-preload-179932
	I0916 11:50:18.244138  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	W0916 11:50:18.244297  353745 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:50:18.244412  353745 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:50:18.303137  353745 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-179932 --name no-preload-179932 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-179932 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-179932 --network no-preload-179932 --ip 192.168.103.2 --volume no-preload-179932:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:50:18.643596  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Running}}
	I0916 11:50:18.667792  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.688027  353745 cli_runner.go:164] Run: docker exec no-preload-179932 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:50:18.735261  353745 oci.go:144] the created container "no-preload-179932" has a running status.
	I0916 11:50:18.735326  353745 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa...
	I0916 11:50:18.766733  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:50:18.766766  353745 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 1.329386554s
	I0916 11:50:18.766783  353745 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:50:18.853467  353745 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:50:18.875421  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.894347  353745 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:50:18.894368  353745 kic_runner.go:114] Args: [docker exec --privileged no-preload-179932 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:50:18.942980  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.964524  353745 machine.go:93] provisionDockerMachine start ...
	I0916 11:50:18.964628  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:18.985177  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:18.985626  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:18.985648  353745 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:50:18.986437  353745 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52304->127.0.0.1:33098: read: connection reset by peer
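
The container's 22/tcp is published on an ephemeral loopback port, and the inspect template above is how that port is recovered; in this run it resolves to 33098, the port the native SSH client then dials. For example:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-179932
    33098

The first handshake fails with a connection reset, presumably because sshd inside the freshly started container is not up yet; the same command succeeds at 11:50:22 below.
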
	I0916 11:50:20.352937  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:50:20.352965  353745 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.91554704s
	I0916 11:50:20.352978  353745 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:50:20.375094  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:50:20.375146  353745 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 2.93769009s
	I0916 11:50:20.375162  353745 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:50:20.404338  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:50:20.404368  353745 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 2.967049618s
	I0916 11:50:20.404383  353745 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:50:20.440630  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:50:20.440662  353745 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.002881935s
	I0916 11:50:20.440675  353745 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:50:20.758418  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:50:20.758445  353745 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 3.321045606s
	I0916 11:50:20.758457  353745 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:50:20.758473  353745 cache.go:87] Successfully saved all images to host disk.
	I0916 11:50:22.121000  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179932
	
	I0916 11:50:22.121029  353745 ubuntu.go:169] provisioning hostname "no-preload-179932"
	I0916 11:50:22.121084  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.139064  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.139265  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.139281  353745 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-179932 && echo "no-preload-179932" | sudo tee /etc/hostname
	I0916 11:50:22.285481  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179932
	
	I0916 11:50:22.285587  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.303430  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.303635  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.303653  353745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-179932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-179932/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-179932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:50:22.441654  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
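
The heredoc above pins the new hostname locally: if no /etc/hosts entry already matches no-preload-179932, it rewrites an existing 127.0.1.1 line or appends one, so the name resolves without DNS. Either branch should leave a line like:

    127.0.1.1 no-preload-179932
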
	I0916 11:50:22.441687  353745 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:50:22.441713  353745 ubuntu.go:177] setting up certificates
	I0916 11:50:22.441726  353745 provision.go:84] configureAuth start
	I0916 11:50:22.441784  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:22.459186  353745 provision.go:143] copyHostCerts
	I0916 11:50:22.459247  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:50:22.459254  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:50:22.459318  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:50:22.459401  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:50:22.459412  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:50:22.459436  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:50:22.459501  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:50:22.459509  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:50:22.459529  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:50:22.459579  353745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.no-preload-179932 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-179932]
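
configureAuth refreshes the host-side copies of key.pem, ca.pem and cert.pem, then mints a server certificate whose SANs cover 127.0.0.1, the container IP 192.168.103.2, and the names localhost, minikube and no-preload-179932. A compact sketch of issuing such a cert with Go's crypto/x509 (the throwaway in-memory CA and all names are illustrative, not minikube's code):

    // cert_sketch.go - CA-signed server cert with the SANs from the log line above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-179932"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-179932"},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    }

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // errors elided in this sketch
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        der, err := issueServerCert(caCert, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Println("issued server cert,", len(der), "DER bytes")
    }
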
	I0916 11:50:22.604596  353745 provision.go:177] copyRemoteCerts
	I0916 11:50:22.604661  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:50:22.604696  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.623335  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:22.722150  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:50:22.744937  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:50:22.767660  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:50:22.790813  353745 provision.go:87] duration metric: took 349.073566ms to configureAuth
	I0916 11:50:22.790843  353745 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:50:22.791022  353745 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:22.791130  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.809366  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.809570  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.809594  353745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:50:23.037925  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:50:23.037948  353745 machine.go:96] duration metric: took 4.073399787s to provisionDockerMachine
	I0916 11:50:23.037960  353745 client.go:171] duration metric: took 5.493852423s to LocalClient.Create
	I0916 11:50:23.037983  353745 start.go:167] duration metric: took 5.493918053s to libmachine.API.Create "no-preload-179932"
	I0916 11:50:23.037991  353745 start.go:293] postStartSetup for "no-preload-179932" (driver="docker")
	I0916 11:50:23.038043  353745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:50:23.038130  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:50:23.038173  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.057110  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.155780  353745 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:50:23.158999  353745 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:50:23.159029  353745 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:50:23.159036  353745 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:50:23.159042  353745 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:50:23.159052  353745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:50:23.159108  353745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:50:23.159178  353745 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:50:23.159265  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:50:23.168631  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:50:23.191792  353745 start.go:296] duration metric: took 153.784247ms for postStartSetup
	I0916 11:50:23.192189  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:23.210469  353745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json ...
	I0916 11:50:23.210780  353745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:50:23.210826  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.228693  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.322250  353745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:50:23.326606  353745 start.go:128] duration metric: took 5.78575133s to createHost
	I0916 11:50:23.326630  353745 start.go:83] releasing machines lock for "no-preload-179932", held for 5.785949248s
	I0916 11:50:23.326688  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:23.345016  353745 ssh_runner.go:195] Run: cat /version.json
	I0916 11:50:23.345063  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.345140  353745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:50:23.345213  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.364213  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.365476  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.539384  353745 ssh_runner.go:195] Run: systemctl --version
	I0916 11:50:23.544045  353745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:50:23.682500  353745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:50:23.686822  353745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:50:23.705505  353745 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:50:23.705596  353745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:50:23.735375  353745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:50:23.735406  353745 start.go:495] detecting cgroup driver to use...
	I0916 11:50:23.735443  353745 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:50:23.735487  353745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:50:23.751165  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:50:23.762367  353745 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:50:23.762424  353745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:50:23.776422  353745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:50:23.790314  353745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:50:23.871070  353745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:50:23.955641  353745 docker.go:233] disabling docker service ...
	I0916 11:50:23.955704  353745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:50:23.974798  353745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:50:23.986320  353745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:50:24.066055  353745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:50:24.154083  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:50:24.165011  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:50:24.180586  353745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:50:24.180688  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.189971  353745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:50:24.190024  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.199843  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.209792  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.219702  353745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:50:24.228365  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.237703  353745 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.252615  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
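	Taken together, the sed edits above leave the CRI-O drop-in with roughly these keys (a sketch of the intended end state of /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands, not a verbatim dump of the file):
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]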
	I0916 11:50:24.261804  353745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:50:24.269676  353745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:50:24.278212  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:24.351610  353745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:50:24.760310  353745 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:50:24.760392  353745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:50:24.763747  353745 start.go:563] Will wait 60s for crictl version
	I0916 11:50:24.763819  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:24.767047  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:50:24.799325  353745 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:50:24.799407  353745 ssh_runner.go:195] Run: crio --version
	I0916 11:50:24.833821  353745 ssh_runner.go:195] Run: crio --version
	I0916 11:50:24.872021  353745 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:50:24.873644  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:50:24.890696  353745 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:50:24.894309  353745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
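	The one-liner above updates /etc/hosts idempotently: drop any stale line for the name, append the fresh mapping, then copy the temp file over the original. Broken out (note cp rather than mv — inside a Docker container /etc/hosts is a bind mount that can be edited in place but not replaced):
	  { grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything except the old entry
	    echo $'192.168.103.1\thost.minikube.internal'     # append the fresh mapping
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts                        # cp writes in place; mv would break the bind mount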
	I0916 11:50:24.905242  353745 kubeadm.go:883] updating cluster {Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:50:24.905402  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:50:24.905459  353745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:50:24.938604  353745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 11:50:24.938629  353745 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:50:24.938703  353745 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:24.938734  353745 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:24.938778  353745 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:24.938807  353745 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:50:24.938828  353745 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:24.938854  353745 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:24.938794  353745 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:24.938984  353745 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:24.939961  353745 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:24.939978  353745 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:24.940164  353745 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:24.940207  353745 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:24.940241  353745 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:50:24.940248  353745 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:24.940172  353745 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:24.940170  353745 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.118753  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.154474  353745 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0916 11:50:25.154512  353745 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.154548  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.157855  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.162753  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.167885  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.174842  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.177553  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0916 11:50:25.199771  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.199957  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.270508  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.296799  353745 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0916 11:50:25.296844  353745 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0916 11:50:25.296908  353745 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.296933  353745 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0916 11:50:25.296853  353745 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.296965  353745 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.296980  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.296993  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.297001  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.297054  353745 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0916 11:50:25.297079  353745 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0916 11:50:25.297108  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.320461  353745 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0916 11:50:25.320506  353745 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.320553  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.320578  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.333783  353745 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0916 11:50:25.333833  353745 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.333854  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.333872  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.333870  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.333904  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.333948  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.333962  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.414304  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:50:25.414412  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:25.504551  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.504652  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.504665  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.504697  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.504743  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.504760  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.504802  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.1': No such file or directory
	I0916 11:50:25.504831  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 --> /var/lib/minikube/images/kube-scheduler_v1.31.1 (20187136 bytes)
	I0916 11:50:25.715489  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.715508  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.715538  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.715600  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.715604  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.715659  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.913649  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:50:25.913683  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.913700  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:50:25.913708  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:50:25.913757  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0916 11:50:25.913757  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:50:25.913785  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:25.913799  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:25.913659  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:50:25.913838  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:25.913889  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928792  353745 retry.go:31] will retry after 284.043253ms: ssh: rejected: connect failed (open failed)
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928820  353745 retry.go:31] will retry after 206.277714ms: ssh: rejected: connect failed (open failed)
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928832  353745 retry.go:31] will retry after 258.129273ms: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.955883  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.1': No such file or directory
	I0916 11:50:25.955923  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 --> /var/lib/minikube/images/kube-proxy_v1.31.1 (30214144 bytes)
	I0916 11:50:25.955990  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:25.955998  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I0916 11:50:25.956027  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
	I0916 11:50:25.956080  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:25.979690  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:25.980957  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.009367  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:26.009427  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.015683  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:50:26.015784  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:26.015850  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.020816  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:26.020879  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:26.020938  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.035542  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.037133  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.041968  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.219884  353745 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:50:26.219941  353745 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:26.219994  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:26.219941  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.1': No such file or directory
	I0916 11:50:26.220069  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 --> /var/lib/minikube/images/kube-controller-manager_v1.31.1 (26231808 bytes)
	I0916 11:50:28.111335  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.090425901s)
	I0916 11:50:28.111372  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0916 11:50:28.111392  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:28.111394  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.197583966s)
	I0916 11:50:28.111426  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0916 11:50:28.111436  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:28.111440  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: (2.197664353s)
	I0916 11:50:28.111456  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0916 11:50:28.111476  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0916 11:50:28.111454  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0916 11:50:28.111523  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.197610351s)
	I0916 11:50:28.111565  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.1': No such file or directory
	I0916 11:50:28.111596  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 --> /var/lib/minikube/images/kube-apiserver_v1.31.1 (28057088 bytes)
	I0916 11:50:28.111571  353745 ssh_runner.go:235] Completed: which crictl: (1.891560983s)
	I0916 11:50:28.111720  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:29.915246  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.803785881s)
	I0916 11:50:29.915276  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0916 11:50:29.915301  353745 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:29.915321  353745 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.803577324s)
	I0916 11:50:29.915347  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:29.915396  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:32.399830  353745 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.48440876s)
	I0916 11:50:32.399928  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:32.399839  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (2.484470985s)
	I0916 11:50:32.399960  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0916 11:50:32.399988  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:32.400032  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:32.436189  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:50:32.436293  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:33.746085  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.309767608s)
	I0916 11:50:33.746123  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:50:33.746085  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.346024308s)
	I0916 11:50:33.746143  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:50:33.746147  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0916 11:50:33.746168  353745 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0916 11:50:33.746219  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0916 11:50:33.886742  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0916 11:50:33.886791  353745 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:33.886847  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:35.329396  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.442524266s)
	I0916 11:50:35.329425  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0916 11:50:35.329448  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:50:35.329494  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:50:36.770428  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.440905892s)
	I0916 11:50:36.770458  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0916 11:50:36.770484  353745 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:36.770529  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:37.409584  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:50:37.409619  353745 cache_images.go:123] Successfully loaded all cached images
	I0916 11:50:37.409625  353745 cache_images.go:92] duration metric: took 12.470984002s to LoadCachedImages
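	Each of the eight images above followed the same per-image pattern; a condensed shell sketch (the real transfer goes over the ssh_runner session shown in the log — $CACHE and node: below are placeholders, not minikube's code):
	  img=registry.k8s.io/kube-scheduler:v1.31.1
	  tar=/var/lib/minikube/images/kube-scheduler_v1.31.1
	  # is the image already present in the shared container store?
	  if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	    sudo crictl rmi "$img" 2>/dev/null || true       # drop any stale/mismatched tag
	    stat -c '%s %y' "$tar" 2>/dev/null \
	      || scp "$CACHE/kube-scheduler_v1.31.1" "node:$tar"  # copy the tarball if absent
	    sudo podman load -i "$tar"                       # import it into CRI-O's image store
	  fi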
	I0916 11:50:37.409637  353745 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 11:50:37.409719  353745 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-179932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
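	The empty ExecStart= followed by a second ExecStart= in the unit text above is the standard systemd drop-in idiom for replacing (rather than appending to) the base unit's command; the log below shows this text landing as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. After the daemon-reload, the effective command line can be confirmed with:
	  systemctl cat kubelet | grep '^ExecStart='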
	I0916 11:50:37.409783  353745 ssh_runner.go:195] Run: crio config
	I0916 11:50:37.452066  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:37.452086  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:37.452097  353745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:50:37.452115  353745 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-179932 NodeName:no-preload-179932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:50:37.452287  353745 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-179932"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
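	The generated manifest is staged as /var/tmp/minikube/kubeadm.yaml.new and copied into place further down. Before an init, a config like this can be exercised without touching the node via kubeadm's dry-run mode (a sketch of an optional check, not a step minikube itself performs):
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run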
	I0916 11:50:37.452356  353745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:50:37.461638  353745 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 11:50:37.461710  353745 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 11:50:37.469780  353745 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 11:50:37.469859  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:50:37.469894  353745 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 11:50:37.469905  353745 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 11:50:37.473264  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:50:37.473298  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 11:50:38.361857  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:50:38.365559  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:50:38.365594  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 11:50:38.493699  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:50:38.508908  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:50:38.512321  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:50:38.512350  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
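	The checksum=file:… query string above tells the downloader to fetch the companion .sha256 file and verify the binary against it before caching. The manual equivalent for one of the binaries:
	  curl -fLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
	  curl -fLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check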
	I0916 11:50:38.676578  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:50:38.685326  353745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0916 11:50:38.701489  353745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:50:38.718627  353745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0916 11:50:38.735122  353745 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:50:38.738342  353745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:50:38.748252  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:38.827198  353745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:50:38.840338  353745 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932 for IP: 192.168.103.2
	I0916 11:50:38.840364  353745 certs.go:194] generating shared ca certs ...
	I0916 11:50:38.840393  353745 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.840560  353745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:50:38.840615  353745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:50:38.840627  353745 certs.go:256] generating profile certs ...
	I0916 11:50:38.840704  353745 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key
	I0916 11:50:38.840723  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt with IP's: []
	I0916 11:50:38.935911  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt ...
	I0916 11:50:38.935940  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: {Name:mkcfebd0395ea27149b681830fddcbfa0b287805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.936111  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key ...
	I0916 11:50:38.936122  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key: {Name:mkedb064e2171125bc65687de4300740d0c5fa5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.936197  353745 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391
	I0916 11:50:38.936211  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:50:39.161110  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 ...
	I0916 11:50:39.161163  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391: {Name:mk6e55865c08038f9c83c62a1e3de8ab46e37505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.161381  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391 ...
	I0916 11:50:39.161403  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391: {Name:mk7fa07a5319463f001b0ea91f26d16d256d3f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.161513  353745 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt
	I0916 11:50:39.161622  353745 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key
	I0916 11:50:39.161703  353745 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key
	I0916 11:50:39.161726  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt with IP's: []
	I0916 11:50:39.230589  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt ...
	I0916 11:50:39.230621  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt: {Name:mk9382a33ca50c5dc46808284f9e12b01271ffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.230825  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key ...
	I0916 11:50:39.230843  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key: {Name:mk92c148096f3309b2fe7cab24919949c9166c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
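	The apiserver cert generated above should carry the four IP SANs requested (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2); that can be confirmed from the profile directory:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'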
	I0916 11:50:39.231071  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:50:39.231123  353745 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:50:39.231142  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:50:39.231171  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:50:39.231206  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:50:39.231238  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:50:39.231294  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:50:39.231970  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:50:39.254719  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:50:39.277272  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:50:39.299028  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:50:39.321434  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:50:39.343976  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:50:39.367682  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:50:39.389857  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:50:39.411764  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:50:39.434314  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:50:39.455995  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:50:39.478225  353745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:50:39.493981  353745 ssh_runner.go:195] Run: openssl version
	I0916 11:50:39.498988  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:50:39.507998  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.511432  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.511491  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.518178  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:50:39.528049  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:50:39.538529  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.542466  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.542525  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.550361  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:50:39.559880  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:50:39.569042  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.572563  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.572616  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.578893  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
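	The 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: TLS libraries locate a CA by hashing its subject and opening <hash>.0 in the certs directory. The pattern, spelled out for the minikube CA:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"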
	I0916 11:50:39.587606  353745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:50:39.590786  353745 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:50:39.590838  353745 kubeadm.go:392] StartCluster: {Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:50:39.590919  353745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:50:39.590962  353745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:50:39.623993  353745 cri.go:89] found id: ""
	I0916 11:50:39.624065  353745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:50:39.632782  353745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:50:39.641165  353745 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:50:39.641220  353745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:50:39.649467  353745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:50:39.649485  353745 kubeadm.go:157] found existing configuration files:
	
	I0916 11:50:39.649526  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:50:39.657545  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:50:39.657603  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:50:39.665725  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:50:39.674189  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:50:39.674239  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:50:39.681997  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:50:39.690004  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:50:39.690062  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:50:39.697984  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:50:39.706536  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:50:39.706602  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
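	
	The eight commands above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here all four files are simply absent, so every grep exits with status 2). A roughly equivalent shell sketch of that loop, assuming the same endpoint and file set as this run:
	
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"    # missing or pointing elsewhere: remove before kubeadm init
	    done
	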
	I0916 11:50:39.714682  353745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:50:39.749285  353745 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:50:39.749390  353745 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:50:39.766004  353745 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:50:39.766125  353745 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:50:39.766178  353745 kubeadm.go:310] OS: Linux
	I0916 11:50:39.766223  353745 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:50:39.766282  353745 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:50:39.766324  353745 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:50:39.766369  353745 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:50:39.766430  353745 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:50:39.766507  353745 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:50:39.766575  353745 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:50:39.766639  353745 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:50:39.766706  353745 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:50:39.816683  353745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:50:39.816778  353745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:50:39.816904  353745 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:50:39.829767  353745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:50:39.833943  353745 out.go:235]   - Generating certificates and keys ...
	I0916 11:50:39.834055  353745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:50:39.834121  353745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:50:39.912342  353745 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:50:39.981611  353745 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:50:40.100442  353745 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:50:40.353713  353745 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:50:40.529814  353745 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:50:40.529974  353745 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-179932] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:50:40.662396  353745 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:50:40.662532  353745 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-179932] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:50:40.978365  353745 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:50:41.089411  353745 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:50:41.246484  353745 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:50:41.246591  353745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:50:41.338255  353745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:50:41.520493  353745 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:50:41.631124  353745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:50:41.869980  353745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:50:42.120470  353745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:50:42.121129  353745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:50:42.123645  353745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:50:42.125750  353745 out.go:235]   - Booting up control plane ...
	I0916 11:50:42.125883  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:50:42.125983  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:50:42.126071  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:50:42.136142  353745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:50:42.141313  353745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:50:42.141405  353745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:50:42.219091  353745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:50:42.219242  353745 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:50:42.720318  353745 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.359131ms
	I0916 11:50:42.720396  353745 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:50:47.221859  353745 kubeadm.go:310] [api-check] The API server is healthy after 4.501530278s
	I0916 11:50:47.232717  353745 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:50:47.243418  353745 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:50:47.260829  353745 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:50:47.261089  353745 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-179932 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:50:47.268219  353745 kubeadm.go:310] [bootstrap-token] Using token: wbzbzb.swi91qeomz7323fx
	I0916 11:50:47.270698  353745 out.go:235]   - Configuring RBAC rules ...
	I0916 11:50:47.270836  353745 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:50:47.273506  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:50:47.279257  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:50:47.281945  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:50:47.284450  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:50:47.288148  353745 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:50:47.628407  353745 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:50:48.046362  353745 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:50:48.628627  353745 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:50:48.629544  353745 kubeadm.go:310] 
	I0916 11:50:48.629646  353745 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:50:48.629658  353745 kubeadm.go:310] 
	I0916 11:50:48.629750  353745 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:50:48.629775  353745 kubeadm.go:310] 
	I0916 11:50:48.629834  353745 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:50:48.629927  353745 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:50:48.630007  353745 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:50:48.630018  353745 kubeadm.go:310] 
	I0916 11:50:48.630095  353745 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:50:48.630105  353745 kubeadm.go:310] 
	I0916 11:50:48.630171  353745 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:50:48.630180  353745 kubeadm.go:310] 
	I0916 11:50:48.630257  353745 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:50:48.630344  353745 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:50:48.630458  353745 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:50:48.630473  353745 kubeadm.go:310] 
	I0916 11:50:48.630589  353745 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:50:48.630728  353745 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:50:48.630737  353745 kubeadm.go:310] 
	I0916 11:50:48.630851  353745 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wbzbzb.swi91qeomz7323fx \
	I0916 11:50:48.631029  353745 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:50:48.631080  353745 kubeadm.go:310] 	--control-plane 
	I0916 11:50:48.631097  353745 kubeadm.go:310] 
	I0916 11:50:48.631194  353745 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:50:48.631209  353745 kubeadm.go:310] 
	I0916 11:50:48.631311  353745 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wbzbzb.swi91qeomz7323fx \
	I0916 11:50:48.631477  353745 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
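	
	The two join commands above embed the bootstrap token wbzbzb.swi91qeomz7323fx, which kubeadm issues with a 24-hour TTL by default. If it expires before a node joins, a fresh worker join line can be printed on the control plane; this is standard kubeadm, not minikube-specific:
	
	    sudo kubeadm token create --print-join-command    # emits a new 'kubeadm join ... --token ...' line
	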
	I0916 11:50:48.632992  353745 kubeadm.go:310] W0916 11:50:39.746676    2273 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:50:48.633284  353745 kubeadm.go:310] W0916 11:50:39.747329    2273 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:50:48.633518  353745 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:50:48.633654  353745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:50:48.633668  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:48.633678  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:48.636566  353745 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:50:48.638084  353745 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:50:48.642054  353745 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:50:48.642074  353745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:50:48.659841  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:50:48.854859  353745 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:50:48.854907  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:48.854934  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-179932 minikube.k8s.io/updated_at=2024_09_16T11_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=no-preload-179932 minikube.k8s.io/primary=true
	I0916 11:50:48.862914  353745 ops.go:34] apiserver oom_adj: -16
	I0916 11:50:48.947264  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:49.447477  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:49.948030  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:50.448348  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:50.947452  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:51.447333  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:51.947456  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:52.447460  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:52.948258  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:53.018251  353745 kubeadm.go:1113] duration metric: took 4.163399098s to wait for elevateKubeSystemPrivileges
	I0916 11:50:53.018293  353745 kubeadm.go:394] duration metric: took 13.427458529s to StartCluster
	I0916 11:50:53.018313  353745 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:53.018394  353745 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:50:53.019749  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:53.019996  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:50:53.020006  353745 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:50:53.020089  353745 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:50:53.020185  353745 addons.go:69] Setting storage-provisioner=true in profile "no-preload-179932"
	I0916 11:50:53.020206  353745 addons.go:69] Setting default-storageclass=true in profile "no-preload-179932"
	I0916 11:50:53.020229  353745 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-179932"
	I0916 11:50:53.020239  353745 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:53.020210  353745 addons.go:234] Setting addon storage-provisioner=true in "no-preload-179932"
	I0916 11:50:53.020316  353745 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:50:53.020631  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.020797  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.022059  353745 out.go:177] * Verifying Kubernetes components...
	I0916 11:50:53.023597  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:53.044946  353745 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:53.045147  353745 addons.go:234] Setting addon default-storageclass=true in "no-preload-179932"
	I0916 11:50:53.045190  353745 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:50:53.045672  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.046362  353745 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:50:53.046382  353745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:50:53.046420  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:53.067056  353745 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:50:53.067095  353745 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:50:53.067169  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:53.076321  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:53.088844  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:53.210469  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:50:53.312446  353745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:50:53.323161  353745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:50:53.416369  353745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:50:53.603739  353745 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:50:53.605015  353745 node_ready.go:35] waiting up to 6m0s for node "no-preload-179932" to be "Ready" ...
	I0916 11:50:53.841034  353745 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:50:53.842341  353745 addons.go:510] duration metric: took 822.2633ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:50:54.107859  353745 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-179932" context rescaled to 1 replicas
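	
	The sed pipeline run at 11:50:53 rewrites the coredns ConfigMap so pods can resolve host.minikube.internal to the host gateway. Reconstructed from that command (not dumped from the cluster), the patched Corefile gains a block like:
	
	    hosts {
	       192.168.103.1 host.minikube.internal
	       fallthrough
	    }
	
	inserted immediately before the existing 'forward . /etc/resolv.conf' directive, plus a 'log' line ahead of 'errors'.
	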
	I0916 11:50:55.608268  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:50:57.608902  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:00.108773  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:02.608151  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:04.608412  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:07.108282  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:09.108982  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:10.608730  353745 node_ready.go:49] node "no-preload-179932" has status "Ready":"True"
	I0916 11:51:10.608754  353745 node_ready.go:38] duration metric: took 17.003714881s for node "no-preload-179932" to be "Ready" ...
	I0916 11:51:10.608765  353745 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:51:10.615200  353745 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.120543  353745 pod_ready.go:93] pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.120590  353745 pod_ready.go:82] duration metric: took 505.366914ms for pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.120600  353745 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.124478  353745 pod_ready.go:93] pod "etcd-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.124499  353745 pod_ready.go:82] duration metric: took 3.891956ms for pod "etcd-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.124510  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.128756  353745 pod_ready.go:93] pod "kube-apiserver-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.128778  353745 pod_ready.go:82] duration metric: took 4.260684ms for pod "kube-apiserver-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.128790  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.132774  353745 pod_ready.go:93] pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.132795  353745 pod_ready.go:82] duration metric: took 3.997805ms for pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.132806  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ckd46" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.409098  353745 pod_ready.go:93] pod "kube-proxy-ckd46" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.409126  353745 pod_ready.go:82] duration metric: took 276.310033ms for pod "kube-proxy-ckd46" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.409139  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.809415  353745 pod_ready.go:93] pod "kube-scheduler-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.809441  353745 pod_ready.go:82] duration metric: took 400.294201ms for pod "kube-scheduler-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.809456  353745 pod_ready.go:39] duration metric: took 1.200676939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
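	
	The readiness polling above can be reproduced by hand once a working kubectl is available; a sketch, assuming the node name and kube-system labels from this run:
	
	    kubectl wait --for=condition=Ready node/no-preload-179932 --timeout=6m
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	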
	I0916 11:51:11.809472  353745 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:51:11.809528  353745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:51:11.821759  353745 api_server.go:72] duration metric: took 18.801724291s to wait for apiserver process to appear ...
	I0916 11:51:11.821784  353745 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:51:11.821807  353745 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:51:11.825478  353745 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:51:11.826388  353745 api_server.go:141] control plane version: v1.31.1
	I0916 11:51:11.826412  353745 api_server.go:131] duration metric: took 4.6217ms to wait for apiserver health ...
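	
	The healthz endpoint probed above is reachable anonymously on a default kubeadm cluster (the system:public-info-viewer binding covers /healthz, /livez and /readyz), so the same check can be reproduced from the host:
	
	    curl -sk https://192.168.103.2:8443/healthz           # expected body: ok
	    curl -sk 'https://192.168.103.2:8443/livez?verbose'   # per-check breakdown
	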
	I0916 11:51:11.826420  353745 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:51:12.013073  353745 system_pods.go:59] 8 kube-system pods found
	I0916 11:51:12.013103  353745 system_pods.go:61] "coredns-7c65d6cfc9-sfxnk" [ec2c3f40-5323-4dce-ae07-29c4537f3067] Running
	I0916 11:51:12.013109  353745 system_pods.go:61] "etcd-no-preload-179932" [3af42b3e-f310-4932-b24a-85d3b55e19a0] Running
	I0916 11:51:12.013112  353745 system_pods.go:61] "kindnet-2678b" [28d0afc4-03fd-4b6e-8ced-8b440d6153ff] Running
	I0916 11:51:12.013116  353745 system_pods.go:61] "kube-apiserver-no-preload-179932" [7e6f5af8-a459-4b8b-b1b8-5df32f37cfe3] Running
	I0916 11:51:12.013120  353745 system_pods.go:61] "kube-controller-manager-no-preload-179932" [313b35c1-1982-4f0a-a0f9-ffde80f7989e] Running
	I0916 11:51:12.013123  353745 system_pods.go:61] "kube-proxy-ckd46" [2c024fac-4113-4c1b-8b50-3e066e7b9b67] Running
	I0916 11:51:12.013127  353745 system_pods.go:61] "kube-scheduler-no-preload-179932" [969d30fc-6575-4f1f-bcd0-32e8132681e9] Running
	I0916 11:51:12.013133  353745 system_pods.go:61] "storage-provisioner" [040e8794-ddea-4f91-b709-cb999b3c71d5] Running
	I0916 11:51:12.013141  353745 system_pods.go:74] duration metric: took 186.714262ms to wait for pod list to return data ...
	I0916 11:51:12.013150  353745 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:51:12.209497  353745 default_sa.go:45] found service account: "default"
	I0916 11:51:12.209523  353745 default_sa.go:55] duration metric: took 196.365905ms for default service account to be created ...
	I0916 11:51:12.209532  353745 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:51:12.411009  353745 system_pods.go:86] 8 kube-system pods found
	I0916 11:51:12.411045  353745 system_pods.go:89] "coredns-7c65d6cfc9-sfxnk" [ec2c3f40-5323-4dce-ae07-29c4537f3067] Running
	I0916 11:51:12.411056  353745 system_pods.go:89] "etcd-no-preload-179932" [3af42b3e-f310-4932-b24a-85d3b55e19a0] Running
	I0916 11:51:12.411063  353745 system_pods.go:89] "kindnet-2678b" [28d0afc4-03fd-4b6e-8ced-8b440d6153ff] Running
	I0916 11:51:12.411069  353745 system_pods.go:89] "kube-apiserver-no-preload-179932" [7e6f5af8-a459-4b8b-b1b8-5df32f37cfe3] Running
	I0916 11:51:12.411075  353745 system_pods.go:89] "kube-controller-manager-no-preload-179932" [313b35c1-1982-4f0a-a0f9-ffde80f7989e] Running
	I0916 11:51:12.411080  353745 system_pods.go:89] "kube-proxy-ckd46" [2c024fac-4113-4c1b-8b50-3e066e7b9b67] Running
	I0916 11:51:12.411085  353745 system_pods.go:89] "kube-scheduler-no-preload-179932" [969d30fc-6575-4f1f-bcd0-32e8132681e9] Running
	I0916 11:51:12.411090  353745 system_pods.go:89] "storage-provisioner" [040e8794-ddea-4f91-b709-cb999b3c71d5] Running
	I0916 11:51:12.411104  353745 system_pods.go:126] duration metric: took 201.565069ms to wait for k8s-apps to be running ...
	I0916 11:51:12.411116  353745 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:51:12.411160  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:51:12.422546  353745 system_svc.go:56] duration metric: took 11.421673ms WaitForService to wait for kubelet
	I0916 11:51:12.422583  353745 kubeadm.go:582] duration metric: took 19.402550835s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:51:12.422611  353745 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:51:12.609131  353745 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:51:12.609166  353745 node_conditions.go:123] node cpu capacity is 8
	I0916 11:51:12.609185  353745 node_conditions.go:105] duration metric: took 186.568247ms to run NodePressure ...
	I0916 11:51:12.609200  353745 start.go:241] waiting for startup goroutines ...
	I0916 11:51:12.609211  353745 start.go:246] waiting for cluster config update ...
	I0916 11:51:12.609225  353745 start.go:255] writing updated cluster config ...
	I0916 11:51:12.659042  353745 ssh_runner.go:195] Run: rm -f paused
	I0916 11:51:12.751470  353745 out.go:177] * Done! kubectl is now configured to use "no-preload-179932" cluster and "default" namespace by default
	E0916 11:51:12.791894  353745 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
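	
	That final "exec format error" is the root cause threading through this report: the kernel rejects /usr/local/bin/kubectl at exec time, which almost always means an architecture mismatch (e.g. an arm64 binary on this amd64 host) or a truncated/corrupt download, rather than any cluster problem. Quick triage on the affected machine:
	
	    file /usr/local/bin/kubectl                       # expect: ELF 64-bit LSB executable, x86-64
	    head -c 4 /usr/local/bin/kubectl | od -An -tx1    # a valid ELF binary starts with 7f 45 4c 46
	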
	
	
	==> CRI-O <==
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.679711273Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=038565d4-4b28-4f92-9f5c-1f345ce69cae name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.681554773Z" level=info msg="Got pod network &{Name:coredns-7c65d6cfc9-sfxnk Namespace:kube-system ID:9b913c18240cf0e8dd7d375145b81c674010cafd0f8eb5bf5fb483007b2b3943 UID:ec2c3f40-5323-4dce-ae07-29c4537f3067 NetNS:/var/run/netns/5e4fb530-c158-4e60-887b-fdcc17b17070 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.681861332Z" level=info msg="Checking pod kube-system_coredns-7c65d6cfc9-sfxnk for CNI network kindnet (type=ptp)"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.682599633Z" level=info msg="Ran pod sandbox 12785168d30bd14a1cc2dc6399b74aa1137f3ce5f50dbac8ec101d017e6338ac with infra container: kube-system/storage-provisioner/POD" id=038565d4-4b28-4f92-9f5c-1f345ce69cae name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.683730445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=520286a6-6890-419a-a6e2-7f8515d87263 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.683936535Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=520286a6-6890-419a-a6e2-7f8515d87263 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.684321166Z" level=info msg="Ran pod sandbox 9b913c18240cf0e8dd7d375145b81c674010cafd0f8eb5bf5fb483007b2b3943 with infra container: kube-system/coredns-7c65d6cfc9-sfxnk/POD" id=4cc1ac77-30aa-421a-8455-1424a48b7b45 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.684632672Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=add2cd53-f92d-44d3-8e9e-f4a80f587174 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.684867835Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=add2cd53-f92d-44d3-8e9e-f4a80f587174 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.685233945Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=9b827247-ab9f-48bf-b03b-b2d945994edf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.685445748Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:bb97ed7cb2429a420726fbc329199f4600f59ea307bf93745052a9dd7e3f9955],Size_:63269914,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=9b827247-ab9f-48bf-b03b-b2d945994edf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.685547596Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=863009b1-e288-4943-af1a-62501e05710f name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.685634399Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686014530Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=8fa9e99f-9f75-4dba-92e6-a499f81e7d6e name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686178090Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:bb97ed7cb2429a420726fbc329199f4600f59ea307bf93745052a9dd7e3f9955],Size_:63269914,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8fa9e99f-9f75-4dba-92e6-a499f81e7d6e name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686818980Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-sfxnk/coredns" id=4f48f1dd-9fe1-44ee-bc6a-ca92014a90e9 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686894528Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.695539343Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7d28120e820275943bf61bbc418c5d626d58de7cf91c37ff58a8d3f09511b328/merged/etc/passwd: no such file or directory"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.695574739Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7d28120e820275943bf61bbc418c5d626d58de7cf91c37ff58a8d3f09511b328/merged/etc/group: no such file or directory"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.732731855Z" level=info msg="Created container 319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c: kube-system/storage-provisioner/storage-provisioner" id=863009b1-e288-4943-af1a-62501e05710f name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.733590659Z" level=info msg="Starting container: 319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c" id=413ac176-bf5d-4bdb-85b8-9aee1826b477 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.740503723Z" level=info msg="Started container" PID=3240 containerID=319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c description=kube-system/storage-provisioner/storage-provisioner id=413ac176-bf5d-4bdb-85b8-9aee1826b477 name=/runtime.v1.RuntimeService/StartContainer sandboxID=12785168d30bd14a1cc2dc6399b74aa1137f3ce5f50dbac8ec101d017e6338ac
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.744459532Z" level=info msg="Created container 1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac: kube-system/coredns-7c65d6cfc9-sfxnk/coredns" id=4f48f1dd-9fe1-44ee-bc6a-ca92014a90e9 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.745066701Z" level=info msg="Starting container: 1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac" id=dd10f8cb-ef54-4a2d-9029-849d6f82fa90 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.751399545Z" level=info msg="Started container" PID=3255 containerID=1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac description=kube-system/coredns-7c65d6cfc9-sfxnk/coredns id=dd10f8cb-ef54-4a2d-9029-849d6f82fa90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b913c18240cf0e8dd7d375145b81c674010cafd0f8eb5bf5fb483007b2b3943
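	
	The container lifecycle events above can be cross-checked against the runtime with crictl, which this log already invokes; for example:
	
	    sudo crictl ps --label io.kubernetes.pod.namespace=kube-system   # running kube-system containers
	    sudo crictl logs 1a534bc0b815b                                   # the coredns container started above
	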
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a534bc0b815b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     3 seconds ago       Running             coredns                   0                   9b913c18240cf       coredns-7c65d6cfc9-sfxnk
	319ec20c27cc4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     3 seconds ago       Running             storage-provisioner       0                   12785168d30bd       storage-provisioner
	4d6a1ab5026f1       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b   14 seconds ago      Running             kindnet-cni               0                   c69d7a8de2d53       kindnet-2678b
	589063428fb28       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                     18 seconds ago      Running             kube-proxy                0                   c69cfe3f95afb       kube-proxy-ckd46
	4a9a8c6b23212       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                     30 seconds ago      Running             kube-controller-manager   0                   12f1b77dcc6a5       kube-controller-manager-no-preload-179932
	6aec60ed07214       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                     30 seconds ago      Running             kube-scheduler            0                   c99d8af113358       kube-scheduler-no-preload-179932
	8d5a1ec60515c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     30 seconds ago      Running             etcd                      0                   36eff604d6002       etcd-no-preload-179932
	3a0b6ce23d737       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                     30 seconds ago      Running             kube-apiserver            0                   e61434917d78a       kube-apiserver-no-preload-179932
	
	
	==> coredns [1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52027 - 34155 "HINFO IN 7043137295982352462.1682836216271367565. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011352241s
	
	
	==> describe nodes <==
	Name:               no-preload-179932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-179932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=no-preload-179932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_50_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:50:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-179932
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:51:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:51:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-179932
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2b5d727e19a44ae98155858b9a8e152
	  System UUID:                93f9cbba-c2f8-4376-ab54-e687ad96b58b
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sfxnk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     21s
	  kube-system                 etcd-no-preload-179932                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-2678b                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-no-preload-179932             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-no-preload-179932    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-ckd46                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-no-preload-179932             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 18s   kube-proxy       
	  Normal   Starting                 27s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 27s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  26s   kubelet          Node no-preload-179932 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26s   kubelet          Node no-preload-179932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26s   kubelet          Node no-preload-179932 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           22s   node-controller  Node no-preload-179932 event: Registered Node no-preload-179932 in Controller
	  Normal   NodeReady                4s    kubelet          Node no-preload-179932 status is now: NodeReady
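	
	With a working kubectl, the same node view regenerates directly:
	
	    kubectl describe node no-preload-179932
	    kubectl get node no-preload-179932 -o wide    # one-line summary of the fields above
	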
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 7b 93 72 59 99 08 06
	[Sep16 11:38] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e c8 59 6d ba 48 08 06
	[Sep16 11:39] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 0e 56 ba 2b 08 08 06
	[  +0.072831] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 e4 c5 5d 5b cd 08 06
	
	
	==> etcd [8d5a1ec60515c3d2cf2ca04cb04d81bb6e475fd0facec6605bc2f2857dca90f5] <==
	{"level":"info","ts":"2024-09-16T11:50:43.301859Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:50:43.302121Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:50:43.302157Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:50:43.302244Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:50:43.302271Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:50:43.828178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.829531Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.829807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:50:43.829807Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-179932 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:50:43.829838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:50:43.830143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:50:43.830179Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:50:43.830312Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.830401Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.830433Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.831241Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:50:43.831311Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:50:43.832100Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T11:50:43.832209Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:51:14 up  1:33,  0 users,  load average: 1.39, 0.88, 0.83
	Linux no-preload-179932 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4d6a1ab5026f16f7b6b74929edce565d1b79109723753135d31aaf14d219b7b2] <==
	I0916 11:50:59.494689       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:50:59.494926       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0916 11:50:59.495223       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:50:59.495243       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:50:59.495259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:50:59.893976       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:50:59.893995       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:50:59.894000       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:51:00.094309       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:51:00.094342       1 metrics.go:61] Registering metrics
	I0916 11:51:00.094412       1 controller.go:374] Syncing nftables rules
	I0916 11:51:09.898148       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:51:09.898215       1 main.go:299] handling current node
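	
	kindnet here is handling only the single node's PodCIDR and syncing nftables rules for network policies. Its live output is easiest to pull via the DaemonSet's pods; a sketch, assuming the upstream app=kindnet label (the label itself is not shown in this log):
	
	    kubectl -n kube-system logs -l app=kindnet --tail=20
	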
	
	
	==> kube-apiserver [3a0b6ce23d7370d3f0843ffa20a8f351fadb19d104cdb3b6c793368ecae40e03] <==
	I0916 11:50:45.320269       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:50:45.320285       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:50:45.320376       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:50:45.320409       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:50:45.320417       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:50:45.320424       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:50:45.320429       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:50:45.320609       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:50:45.321396       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:50:45.393816       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:50:46.224206       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:50:46.229372       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:50:46.229392       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:50:46.702715       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:50:46.742521       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:50:46.830384       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:50:46.836946       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:50:46.838174       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:50:46.842062       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:50:47.301396       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:50:48.037176       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:50:48.045066       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:50:48.053110       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:50:52.653935       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:50:53.055171       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4a9a8c6b232126b3a3f834266ab09739227dd047f65a57809b27690d13071f64] <==
	I0916 11:50:52.251935       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 11:50:52.256873       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:50:52.258059       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:50:52.302452       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 11:50:52.302471       1 shared_informer.go:320] Caches are synced for expand
	I0916 11:50:52.302534       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 11:50:52.668325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:50:52.701193       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:50:52.701224       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:50:53.010133       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:50:53.220087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="562.02849ms"
	I0916 11:50:53.228924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.785211ms"
	I0916 11:50:53.229036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.151µs"
	I0916 11:50:53.229283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.06µs"
	I0916 11:50:53.642413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.624ms"
	I0916 11:50:53.693844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.377531ms"
	I0916 11:50:53.694092       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.73µs"
	I0916 11:51:10.344634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:10.352621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:10.357500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.304µs"
	I0916 11:51:10.378868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="91.565µs"
	I0916 11:51:11.071302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.849603ms"
	I0916 11:51:11.071412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.462µs"
	I0916 11:51:12.024328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:12.024346       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [589063428fb28a5c87aad20f178d6bbf4342f3d4061b3649c5a14a2f2612be36] <==
	I0916 11:50:55.310468       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:50:55.424560       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 11:50:55.424623       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:50:55.443689       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:50:55.443760       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:50:55.445611       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:50:55.446002       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:50:55.446028       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:50:55.447139       1 config.go:328] "Starting node config controller"
	I0916 11:50:55.447220       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:50:55.447186       1 config.go:199] "Starting service config controller"
	I0916 11:50:55.447262       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:50:55.447132       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:50:55.447289       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:50:55.547414       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:50:55.547439       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:50:55.547449       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [6aec60ed072148d0a4ddf5d94e307f15b744a472ca2e73827876970e20146006] <==
	W0916 11:50:45.313624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0916 11:50:45.313521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.313641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 11:50:45.313653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.313693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:50:45.313744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:50:45.313988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.314171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.314196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.120634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:46.120679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.331027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:50:46.331070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.353666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:50:46.353722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.411296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:50:46.411335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.429940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:50:46.429988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.458442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:50:46.458490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0916 11:50:46.910182       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294366    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c024fac-4113-4c1b-8b50-3e066e7b9b67-lib-modules\") pod \"kube-proxy-ckd46\" (UID: \"2c024fac-4113-4c1b-8b50-3e066e7b9b67\") " pod="kube-system/kube-proxy-ckd46"
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294390    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-cni-cfg\") pod \"kindnet-2678b\" (UID: \"28d0afc4-03fd-4b6e-8ced-8b440d6153ff\") " pod="kube-system/kindnet-2678b"
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294658    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-xtables-lock\") pod \"kindnet-2678b\" (UID: \"28d0afc4-03fd-4b6e-8ced-8b440d6153ff\") " pod="kube-system/kindnet-2678b"
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294702    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltv87\" (UniqueName: \"kubernetes.io/projected/2c024fac-4113-4c1b-8b50-3e066e7b9b67-kube-api-access-ltv87\") pod \"kube-proxy-ckd46\" (UID: \"2c024fac-4113-4c1b-8b50-3e066e7b9b67\") " pod="kube-system/kube-proxy-ckd46"
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294726    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-lib-modules\") pod \"kindnet-2678b\" (UID: \"28d0afc4-03fd-4b6e-8ced-8b440d6153ff\") " pod="kube-system/kindnet-2678b"
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.402869    2607 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.402933    2607 projected.go:194] Error preparing data for projected volume kube-api-access-ltv87 for pod kube-system/kube-proxy-ckd46: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.402869    2607 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.403023    2607 projected.go:194] Error preparing data for projected volume kube-api-access-mpmnk for pod kube-system/kindnet-2678b: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.403029    2607 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c024fac-4113-4c1b-8b50-3e066e7b9b67-kube-api-access-ltv87 podName:2c024fac-4113-4c1b-8b50-3e066e7b9b67 nodeName:}" failed. No retries permitted until 2024-09-16 11:50:54.902996386 +0000 UTC m=+7.050915339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ltv87" (UniqueName: "kubernetes.io/projected/2c024fac-4113-4c1b-8b50-3e066e7b9b67-kube-api-access-ltv87") pod "kube-proxy-ckd46" (UID: "2c024fac-4113-4c1b-8b50-3e066e7b9b67") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.403062    2607 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-kube-api-access-mpmnk podName:28d0afc4-03fd-4b6e-8ced-8b440d6153ff nodeName:}" failed. No retries permitted until 2024-09-16 11:50:54.903050076 +0000 UTC m=+7.050969022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mpmnk" (UniqueName: "kubernetes.io/projected/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-kube-api-access-mpmnk") pod "kindnet-2678b" (UID: "28d0afc4-03fd-4b6e-8ced-8b440d6153ff") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: I0916 11:50:54.905280    2607 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:50:56 no-preload-179932 kubelet[2607]: I0916 11:50:56.027618    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ckd46" podStartSLOduration=3.027598211 podStartE2EDuration="3.027598211s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:50:56.027415248 +0000 UTC m=+8.175334204" watchObservedRunningTime="2024-09-16 11:50:56.027598211 +0000 UTC m=+8.175517164"
	Sep 16 11:50:58 no-preload-179932 kubelet[2607]: E0916 11:50:58.016042    2607 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487458015815892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92080,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:50:58 no-preload-179932 kubelet[2607]: E0916 11:50:58.016091    2607 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487458015815892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92080,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:02 no-preload-179932 kubelet[2607]: I0916 11:51:02.695100    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2678b" podStartSLOduration=5.586249323 podStartE2EDuration="9.695079637s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="2024-09-16 11:50:55.227033013 +0000 UTC m=+7.374951948" lastFinishedPulling="2024-09-16 11:50:59.335863327 +0000 UTC m=+11.483782262" observedRunningTime="2024-09-16 11:51:00.036700007 +0000 UTC m=+12.184618972" watchObservedRunningTime="2024-09-16 11:51:02.695079637 +0000 UTC m=+14.842998616"
	Sep 16 11:51:08 no-preload-179932 kubelet[2607]: E0916 11:51:08.017291    2607 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487468017096773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:102273,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:08 no-preload-179932 kubelet[2607]: E0916 11:51:08.017367    2607 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487468017096773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:102273,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.337301    2607 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518235    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24qdf\" (UniqueName: \"kubernetes.io/projected/ec2c3f40-5323-4dce-ae07-29c4537f3067-kube-api-access-24qdf\") pod \"coredns-7c65d6cfc9-sfxnk\" (UID: \"ec2c3f40-5323-4dce-ae07-29c4537f3067\") " pod="kube-system/coredns-7c65d6cfc9-sfxnk"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518285    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdhnp\" (UniqueName: \"kubernetes.io/projected/040e8794-ddea-4f91-b709-cb999b3c71d5-kube-api-access-tdhnp\") pod \"storage-provisioner\" (UID: \"040e8794-ddea-4f91-b709-cb999b3c71d5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518302    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec2c3f40-5323-4dce-ae07-29c4537f3067-config-volume\") pod \"coredns-7c65d6cfc9-sfxnk\" (UID: \"ec2c3f40-5323-4dce-ae07-29c4537f3067\") " pod="kube-system/coredns-7c65d6cfc9-sfxnk"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518330    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/040e8794-ddea-4f91-b709-cb999b3c71d5-tmp\") pod \"storage-provisioner\" (UID: \"040e8794-ddea-4f91-b709-cb999b3c71d5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:51:11 no-preload-179932 kubelet[2607]: I0916 11:51:11.055193    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=18.055168777 podStartE2EDuration="18.055168777s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:51:11.055129269 +0000 UTC m=+23.203048223" watchObservedRunningTime="2024-09-16 11:51:11.055168777 +0000 UTC m=+23.203087726"
	Sep 16 11:51:11 no-preload-179932 kubelet[2607]: I0916 11:51:11.065541    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sfxnk" podStartSLOduration=18.06551962 podStartE2EDuration="18.06551962s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:51:11.065119525 +0000 UTC m=+23.213038480" watchObservedRunningTime="2024-09-16 11:51:11.06551962 +0000 UTC m=+23.213438552"
	
	
	==> storage-provisioner [319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c] <==
	I0916 11:51:10.752747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:51:10.762574       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:51:10.762667       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:51:10.798892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:51:10.799029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6492543-a96c-4e35-8fc0-19e6c7bc9c6d", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9 became leader
	I0916 11:51:10.799116       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9!
	I0916 11:51:10.899335       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-179932 -n no-preload-179932
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (503.21µs)
helpers_test.go:263: kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
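The repeated "fork/exec /usr/local/bin/kubectl: exec format error" is the kernel refusing to execute a binary whose format does not match the host: on this linux/amd64 agent it almost always means the kubectl binary at that path was built for a different architecture (or is truncated). A minimal Go sketch, not part of the test suite, for confirming this from the ELF header; the binary path comes from the error above, everything else (names, the expected machine constant) is illustrative:

	package main

	import (
		"debug/elf"
		"fmt"
		"log"
	)

	func main() {
		// Open the binary that produced "exec format error" and read its ELF header.
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			log.Fatalf("cannot parse as ELF (corrupt or non-ELF file): %v", err)
		}
		defer f.Close()
		fmt.Printf("class=%v machine=%v type=%v\n", f.Class, f.Machine, f.Type)
		// On an amd64 host a runnable binary reports machine=EM_X86_64; anything
		// else (e.g. EM_AARCH64) reproduces the failure seen throughout this report.
	}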
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-179932
helpers_test.go:235: (dbg) docker inspect no-preload-179932:

-- stdout --
	[
	    {
	        "Id": "33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db",
	        "Created": "2024-09-16T11:50:18.324141753Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 354317,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:50:18.460923195Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/hostname",
	        "HostsPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/hosts",
	        "LogPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db-json.log",
	        "Name": "/no-preload-179932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-179932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-179932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-179932",
	                "Source": "/var/lib/docker/volumes/no-preload-179932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-179932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-179932",
	                "name.minikube.sigs.k8s.io": "no-preload-179932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7cd51b56ae0e7b9c36d315b4ce9fb777c38e910770cfb5f1f448c928dadda05",
	            "SandboxKey": "/var/run/docker/netns/a7cd51b56ae0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-179932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3318c5c795cbdaf6a4546ff9f05fc1f3534565776857632d9afa204a3c5ca91f",
	                    "EndpointID": "1762fc6325de440c55f237e57f8ef1680b848810c568c35778055aedb3d79112",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-179932",
	                        "33415cb7fa83"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
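The inspect document above is large; when a single field matters (for example the container address on the "no-preload-179932" network), docker's --format flag with a Go template avoids parsing the full JSON. A sketch under the assumption that the container is still running, shelling out the same way the helpers do; the template path simply mirrors the structure shown above:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract only NetworkSettings.Networks["no-preload-179932"].IPAddress
		// instead of dumping the whole inspect document.
		tmpl := `{{ (index .NetworkSettings.Networks "no-preload-179932").IPAddress }}`
		out, err := exec.Command("docker", "inspect", "--format", tmpl, "no-preload-179932").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // prints 192.168.103.2 per the inspect above
	}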
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-179932 -n no-preload-179932
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-179932 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-179932 logs -n 25: (1.071101242s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cri-dockerd --version                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                  |                              |         |         |                     |                     |
	|         | containerd --all --full                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat containerd                          |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467        | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-406673 image                           | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-946599 | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | disable-driver-mounts-946599                           |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:50:17
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:50:17.261646  353745 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:50:17.261961  353745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:50:17.261974  353745 out.go:358] Setting ErrFile to fd 2...
	I0916 11:50:17.261981  353745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:50:17.262273  353745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:50:17.263118  353745 out.go:352] Setting JSON to false
	I0916 11:50:17.264280  353745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5557,"bootTime":1726481860,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:50:17.264369  353745 start.go:139] virtualization: kvm guest
	I0916 11:50:17.267026  353745 out.go:177] * [no-preload-179932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:50:17.268879  353745 notify.go:220] Checking for updates...
	I0916 11:50:17.268946  353745 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:50:17.270731  353745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:50:17.272238  353745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:50:17.273551  353745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:50:17.275161  353745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:50:17.276866  353745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:50:17.279205  353745 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279359  353745 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279497  353745 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279614  353745 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:50:17.307569  353745 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:50:17.307662  353745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:50:17.364583  353745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:50:17.353613217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
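The docker info line above is the decoded result of the `docker system info --format "{{json .}}"` probe: the daemon emits one JSON document and minikube unmarshals it into a struct. A minimal sketch of that pattern in Go (illustrative only; the struct below keeps just a few of the fields visible in the log and is not minikube's actual type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo keeps only a handful of the fields visible in the log output.
    type dockerInfo struct {
    	ServerVersion   string `json:"ServerVersion"`
    	NCPU            int    `json:"NCPU"`
    	MemTotal        int64  `json:"MemTotal"`
    	CgroupDriver    string `json:"CgroupDriver"`
    	OperatingSystem string `json:"OperatingSystem"`
    }

    func main() {
    	// Same probe the log shows: one JSON blob on stdout.
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
    		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
    }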
	I0916 11:50:17.364687  353745 docker.go:318] overlay module found
	I0916 11:50:17.367827  353745 out.go:177] * Using the docker driver based on user configuration
	I0916 11:50:17.369319  353745 start.go:297] selected driver: docker
	I0916 11:50:17.369364  353745 start.go:901] validating driver "docker" against <nil>
	I0916 11:50:17.369380  353745 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:50:17.370517  353745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:50:17.426383  353745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:50:17.415784753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:50:17.426604  353745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:50:17.426824  353745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:50:17.428784  353745 out.go:177] * Using Docker driver with root privileges
	I0916 11:50:17.430291  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:17.430351  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:17.430360  353745 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:50:17.430422  353745 start.go:340] cluster config:
	{Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
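This cluster config dump is the in-memory structure that gets persisted to .minikube/profiles/no-preload-179932/config.json a few lines later. A rough sketch of that save step, assuming a heavily trimmed stand-in for minikube's real config type:

    package main

    import (
    	"encoding/json"
    	"os"
    	"path/filepath"
    )

    // clusterConfig is a trimmed, invented stand-in for minikube's config type;
    // only a few of the fields from the log line are reproduced.
    type clusterConfig struct {
    	Name              string
    	Memory            int
    	CPUs              int
    	DiskSize          int
    	Driver            string
    	KubernetesVersion string
    	ContainerRuntime  string
    }

    func main() {
    	cfg := clusterConfig{
    		Name:              "no-preload-179932",
    		Memory:            2200,
    		CPUs:              2,
    		DiskSize:          20000,
    		Driver:            "docker",
    		KubernetesVersion: "v1.31.1",
    		ContainerRuntime:  "crio",
    	}
    	// The real path lives under .minikube/profiles/<name>/config.json.
    	path := filepath.Join(os.TempDir(), "config.json")
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }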
	I0916 11:50:17.432336  353745 out.go:177] * Starting "no-preload-179932" primary control-plane node in "no-preload-179932" cluster
	I0916 11:50:17.434034  353745 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:50:17.435683  353745 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:50:17.436991  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:50:17.437122  353745 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:50:17.437157  353745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json ...
	I0916 11:50:17.437183  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json: {Name:mkc16156d5a07d416da64f9d96a3502b09dcbb6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:17.437384  353745 cache.go:107] acquiring lock: {Name:mk871ae736ce09ba2b4421598649b9ecfc9a98bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437387  353745 cache.go:107] acquiring lock: {Name:mk8b23bbceb92ce965299065ca3d25050387467b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437413  353745 cache.go:107] acquiring lock: {Name:mk0d227841b16d1443985320c46c5945df5de856 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437384  353745 cache.go:107] acquiring lock: {Name:mkc9fa4e48807b59cdf7eefb19d5245546dc831d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437456  353745 cache.go:107] acquiring lock: {Name:mkf3f21a53f01d1ee0608b28c94cf582dc8c355f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437403  353745 cache.go:107] acquiring lock: {Name:mk540470437675d9c95f2acaf015b6015148e24f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437530  353745 cache.go:107] acquiring lock: {Name:mkbb0d7522afd30851ddf834442136fb3567a26a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437558  353745 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:50:17.437616  353745 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:17.437629  353745 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:17.437676  353745 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:17.437698  353745 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:17.437787  353745 cache.go:107] acquiring lock: {Name:mkfcf90f9df5885fe87d6ff86cdb7f8f58dec344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437843  353745 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:50:17.437856  353745 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 477.041µs
	I0916 11:50:17.437874  353745 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
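The storage-provisioner entry above is a cache hit: the tarball already exists under .minikube/cache/images, so the save completes in microseconds while the other seven images go on to download. The check amounts to a stat on the expected tar path, as in this sketch (tarPathFor is an invented helper mirroring the layout seen in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // tarPathFor maps an image ref to its cache tarball, mirroring the layout
    // in the log (.../cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5).
    func tarPathFor(cacheDir, image string) string {
    	// "gcr.io/k8s-minikube/storage-provisioner:v5" -> ".../storage-provisioner_v5"
    	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
    }

    func cached(cacheDir, image string) bool {
    	_, err := os.Stat(tarPathFor(cacheDir, image))
    	return err == nil
    }

    func main() {
    	dir := "/home/jenkins/.minikube/cache/images/amd64" // example path
    	img := "gcr.io/k8s-minikube/storage-provisioner:v5"
    	if cached(dir, img) {
    		fmt.Println("cache hit, skipping download:", img)
    	} else {
    		fmt.Println("cache miss, would download:", img)
    	}
    }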
	I0916 11:50:17.437894  353745 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:17.437975  353745 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:17.439129  353745 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:50:17.439139  353745 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:17.439178  353745 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:17.439228  353745 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:17.439303  353745 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:17.439442  353745 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:17.439509  353745 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	W0916 11:50:17.465435  353745 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:50:17.465457  353745 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:50:17.465523  353745 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:50:17.465535  353745 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:50:17.465539  353745 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:50:17.465546  353745 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:50:17.465551  353745 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:50:17.540421  353745 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:50:17.540482  353745 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:50:17.540523  353745 start.go:360] acquireMachinesLock for no-preload-179932: {Name:mkd475c3f7aed9017143023aeb4fceb62fe6c60d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.540666  353745 start.go:364] duration metric: took 116.626µs to acquireMachinesLock for "no-preload-179932"
	I0916 11:50:17.540697  353745 start.go:93] Provisioning new machine with config: &{Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:50:17.540799  353745 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:50:17.543760  353745 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:50:17.544066  353745 start.go:159] libmachine.API.Create for "no-preload-179932" (driver="docker")
	I0916 11:50:17.544097  353745 client.go:168] LocalClient.Create starting
	I0916 11:50:17.544177  353745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:50:17.544211  353745 main.go:141] libmachine: Decoding PEM data...
	I0916 11:50:17.544230  353745 main.go:141] libmachine: Parsing certificate...
	I0916 11:50:17.544292  353745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:50:17.544320  353745 main.go:141] libmachine: Decoding PEM data...
	I0916 11:50:17.544336  353745 main.go:141] libmachine: Parsing certificate...
	I0916 11:50:17.544768  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:50:17.563971  353745 cli_runner.go:211] docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:50:17.564043  353745 network_create.go:284] running [docker network inspect no-preload-179932] to gather additional debugging logs...
	I0916 11:50:17.564060  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932
	W0916 11:50:17.581522  353745 cli_runner.go:211] docker network inspect no-preload-179932 returned with exit code 1
	I0916 11:50:17.581552  353745 network_create.go:287] error running [docker network inspect no-preload-179932]: docker network inspect no-preload-179932: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-179932 not found
	I0916 11:50:17.581569  353745 network_create.go:289] output of [docker network inspect no-preload-179932]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-179932 not found
	
	** /stderr **
	I0916 11:50:17.581662  353745 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:50:17.600809  353745 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:50:17.601729  353745 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:50:17.602523  353745 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:50:17.603150  353745 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:50:17.603787  353745 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:50:17.604419  353745 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:50:17.605797  353745 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039cfe0}
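The six "skipping subnet" lines above walk candidate private /24 ranges in steps of 9 in the third octet (49, 58, 67, 76, 85, 94) until 192.168.103.0/24 comes up free. A toy version of that scan (the taken set is hard-coded from the log; the real code inspects existing bridge interfaces):

    package main

    import "fmt"

    func main() {
    	// Subnets already claimed by other minikube networks, per the log.
    	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}

    	// Candidate third octets advance by 9, matching the log: 49, 58, 67, ...
    	for octet := 49; octet <= 254; octet += 9 {
    		if taken[octet] {
    			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
    			continue
    		}
    		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway 192.168.%d.1)\n", octet, octet)
    		break
    	}
    }

With the log's taken set, the loop lands on 192.168.103.0/24, exactly the subnet the docker network create call below uses.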
	I0916 11:50:17.605828  353745 network_create.go:124] attempt to create docker network no-preload-179932 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:50:17.605872  353745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-179932 no-preload-179932
	I0916 11:50:17.676431  353745 network_create.go:108] docker network no-preload-179932 192.168.103.0/24 created
	I0916 11:50:17.676472  353745 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-179932" container
	I0916 11:50:17.676527  353745 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:50:17.695151  353745 cli_runner.go:164] Run: docker volume create no-preload-179932 --label name.minikube.sigs.k8s.io=no-preload-179932 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:50:17.716208  353745 oci.go:103] Successfully created a docker volume no-preload-179932
	I0916 11:50:17.716280  353745 cli_runner.go:164] Run: docker run --rm --name no-preload-179932-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-179932 --entrypoint /usr/bin/test -v no-preload-179932:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:50:17.982139  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:50:18.004879  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:50:18.032231  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:50:18.062798  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:50:18.064953  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:50:18.071480  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:50:18.072209  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:50:18.157840  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:50:18.157871  353745 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 720.488492ms
	I0916 11:50:18.157891  353745 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:50:18.244108  353745 oci.go:107] Successfully prepared a docker volume no-preload-179932
	I0916 11:50:18.244138  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	W0916 11:50:18.244297  353745 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:50:18.244412  353745 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:50:18.303137  353745 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-179932 --name no-preload-179932 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-179932 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-179932 --network no-preload-179932 --ip 192.168.103.2 --volume no-preload-179932:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:50:18.643596  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Running}}
	I0916 11:50:18.667792  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.688027  353745 cli_runner.go:164] Run: docker exec no-preload-179932 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:50:18.735261  353745 oci.go:144] the created container "no-preload-179932" has a running status.
	I0916 11:50:18.735326  353745 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa...
	I0916 11:50:18.766733  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:50:18.766766  353745 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 1.329386554s
	I0916 11:50:18.766783  353745 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:50:18.853467  353745 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:50:18.875421  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.894347  353745 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:50:18.894368  353745 kic_runner.go:114] Args: [docker exec --privileged no-preload-179932 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:50:18.942980  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.964524  353745 machine.go:93] provisionDockerMachine start ...
	I0916 11:50:18.964628  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:18.985177  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:18.985626  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:18.985648  353745 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:50:18.986437  353745 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52304->127.0.0.1:33098: read: connection reset by peer
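The handshake failure above ("connection reset by peer") is expected while sshd inside the freshly started container is still coming up; the client simply retries until the hostname command succeeds at 11:50:22. A hedged sketch of such a retry loop using golang.org/x/crypto/ssh (the user, port, and key path come from the log; the backoff policy here is invented):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func dialWithRetry(addr, keyPath string, attempts int) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway node
    		Timeout:         5 * time.Second,
    	}
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		lastErr = err // e.g. "connection reset by peer" while sshd boots
    		time.Sleep(time.Second)
    	}
    	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, lastErr)
    }

    func main() {
    	client, err := dialWithRetry("127.0.0.1:33098", "/path/to/id_rsa", 30)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    }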
	I0916 11:50:20.352937  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:50:20.352965  353745 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.91554704s
	I0916 11:50:20.352978  353745 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:50:20.375094  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:50:20.375146  353745 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 2.93769009s
	I0916 11:50:20.375162  353745 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:50:20.404338  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:50:20.404368  353745 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 2.967049618s
	I0916 11:50:20.404383  353745 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:50:20.440630  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:50:20.440662  353745 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.002881935s
	I0916 11:50:20.440675  353745 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:50:20.758418  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:50:20.758445  353745 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 3.321045606s
	I0916 11:50:20.758457  353745 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:50:20.758473  353745 cache.go:87] Successfully saved all images to host disk.
	I0916 11:50:22.121000  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179932
	
	I0916 11:50:22.121029  353745 ubuntu.go:169] provisioning hostname "no-preload-179932"
	I0916 11:50:22.121084  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.139064  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.139265  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.139281  353745 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-179932 && echo "no-preload-179932" | sudo tee /etc/hostname
	I0916 11:50:22.285481  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179932
	
	I0916 11:50:22.285587  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.303430  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.303635  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.303653  353745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-179932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-179932/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-179932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:50:22.441654  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
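The shell snippet the provisioner just ran is an idempotent hostname fix-up: only if /etc/hosts has no entry for no-preload-179932 does it rewrite an existing 127.0.1.1 line or append a new one. The same logic expressed in Go, as a sketch (writing to a scratch copy rather than the live file):

    package main

    import (
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostname mirrors the shell logic from the log: if no line already
    // maps the hostname, either rewrite an existing 127.0.1.1 entry or append one.
    func ensureHostname(contents, hostname string) string {
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(contents) {
    		return contents // already present, nothing to do
    	}
    	loopbackLine := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopbackLine.MatchString(contents) {
    		return loopbackLine.ReplaceAllString(contents, "127.0.1.1 "+hostname)
    	}
    	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	out := ensureHostname(string(data), "no-preload-179932")
    	// Write to a scratch copy rather than /etc/hosts itself.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
    		panic(err)
    	}
    }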
	I0916 11:50:22.441687  353745 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:50:22.441713  353745 ubuntu.go:177] setting up certificates
	I0916 11:50:22.441726  353745 provision.go:84] configureAuth start
	I0916 11:50:22.441784  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:22.459186  353745 provision.go:143] copyHostCerts
	I0916 11:50:22.459247  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:50:22.459254  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:50:22.459318  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:50:22.459401  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:50:22.459412  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:50:22.459436  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:50:22.459501  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:50:22.459509  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:50:22.459529  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:50:22.459579  353745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.no-preload-179932 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-179932]
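configureAuth generates a server certificate whose SAN list mixes IP addresses and DNS names ([127.0.0.1 192.168.103.2 localhost minikube no-preload-179932]). The core of any such generator is splitting the SANs into the IPAddresses and DNSNames fields of the x509 template; a self-contained sketch follows (self-signed for brevity, whereas the real cert is signed with the minikube CA key):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	sans := []string{"127.0.0.1", "192.168.103.2", "localhost", "minikube", "no-preload-179932"}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-179932"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// The important step: IP SANs and DNS SANs live in different template fields.
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, san)
    		}
    	}

    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	// Self-signed for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }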
	I0916 11:50:22.604596  353745 provision.go:177] copyRemoteCerts
	I0916 11:50:22.604661  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:50:22.604696  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.623335  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:22.722150  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:50:22.744937  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:50:22.767660  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:50:22.790813  353745 provision.go:87] duration metric: took 349.073566ms to configureAuth
	I0916 11:50:22.790843  353745 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:50:22.791022  353745 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:22.791130  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.809366  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.809570  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.809594  353745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:50:23.037925  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:50:23.037948  353745 machine.go:96] duration metric: took 4.073399787s to provisionDockerMachine
	I0916 11:50:23.037960  353745 client.go:171] duration metric: took 5.493852423s to LocalClient.Create
	I0916 11:50:23.037983  353745 start.go:167] duration metric: took 5.493918053s to libmachine.API.Create "no-preload-179932"
	I0916 11:50:23.037991  353745 start.go:293] postStartSetup for "no-preload-179932" (driver="docker")
	I0916 11:50:23.038043  353745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:50:23.038130  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:50:23.038173  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.057110  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.155780  353745 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:50:23.158999  353745 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:50:23.159029  353745 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:50:23.159036  353745 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:50:23.159042  353745 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:50:23.159052  353745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:50:23.159108  353745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:50:23.159178  353745 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:50:23.159265  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:50:23.168631  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:50:23.191792  353745 start.go:296] duration metric: took 153.784247ms for postStartSetup
	I0916 11:50:23.192189  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:23.210469  353745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json ...
	I0916 11:50:23.210780  353745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:50:23.210826  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.228693  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.322250  353745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:50:23.326606  353745 start.go:128] duration metric: took 5.78575133s to createHost
	I0916 11:50:23.326630  353745 start.go:83] releasing machines lock for "no-preload-179932", held for 5.785949248s
	I0916 11:50:23.326688  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:23.345016  353745 ssh_runner.go:195] Run: cat /version.json
	I0916 11:50:23.345063  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.345140  353745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:50:23.345213  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.364213  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.365476  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.539384  353745 ssh_runner.go:195] Run: systemctl --version
	I0916 11:50:23.544045  353745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:50:23.682500  353745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:50:23.686822  353745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:50:23.705505  353745 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:50:23.705596  353745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:50:23.735375  353745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
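The find/mv pipeline above sidelines the default bridge and podman CNI configs by renaming them with a .mk_disabled suffix, so the kindnet config chosen earlier wins at runtime. Equivalent logic as a Go sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	disabled := 0
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			panic(err)
    		}
    		for _, path := range matches {
    			if strings.HasSuffix(path, ".mk_disabled") {
    				continue // already sidelined on an earlier run
    			}
    			// Renaming (not deleting) keeps the config recoverable.
    			if err := os.Rename(path, path+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", path)
    			disabled++
    		}
    	}
    	fmt.Printf("disabled %d bridge cni config(s)\n", disabled)
    }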
	I0916 11:50:23.735406  353745 start.go:495] detecting cgroup driver to use...
	I0916 11:50:23.735443  353745 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:50:23.735487  353745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:50:23.751165  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:50:23.762367  353745 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:50:23.762424  353745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:50:23.776422  353745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:50:23.790314  353745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:50:23.871070  353745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:50:23.955641  353745 docker.go:233] disabling docker service ...
	I0916 11:50:23.955704  353745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:50:23.974798  353745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:50:23.986320  353745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:50:24.066055  353745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:50:24.154083  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:50:24.165011  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:50:24.180586  353745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:50:24.180688  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.189971  353745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:50:24.190024  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.199843  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.209792  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.219702  353745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:50:24.228365  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.237703  353745 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.252615  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.261804  353745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:50:24.269676  353745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:50:24.278212  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:24.351610  353745 ssh_runner.go:195] Run: sudo systemctl restart crio
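The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, pin conmon_cgroup to "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, followed by a daemon-reload and crio restart. The same rewrites expressed as Go regexp replaces (a sketch operating on a string, not the live file; the sample input is invented):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// Mirror the sed edits from the log, one regexp per setting.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
    		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
    	// Ensure a default_sysctls block exists, then prepend the port setting.
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "default_sysctls = [\n]\n"
    	}
    	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
    		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
    	fmt.Print(conf)
    }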
	I0916 11:50:24.760310  353745 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:50:24.760392  353745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:50:24.763747  353745 start.go:563] Will wait 60s for crictl version
	I0916 11:50:24.763819  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:24.767047  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:50:24.799325  353745 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:50:24.799407  353745 ssh_runner.go:195] Run: crio --version
	I0916 11:50:24.833821  353745 ssh_runner.go:195] Run: crio --version
	I0916 11:50:24.872021  353745 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:50:24.873644  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:50:24.890696  353745 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:50:24.894309  353745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:50:24.905242  353745 kubeadm.go:883] updating cluster {Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:50:24.905402  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:50:24.905459  353745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:50:24.938604  353745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 11:50:24.938629  353745 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:50:24.938703  353745 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:24.938734  353745 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:24.938778  353745 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:24.938807  353745 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:50:24.938828  353745 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:24.938854  353745 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:24.938794  353745 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:24.938984  353745 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:24.939961  353745 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:24.939978  353745 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:24.940164  353745 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:24.940207  353745 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:24.940241  353745 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:50:24.940248  353745 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:24.940172  353745 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:24.940170  353745 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.118753  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.154474  353745 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0916 11:50:25.154512  353745 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.154548  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.157855  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.162753  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.167885  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.174842  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.177553  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0916 11:50:25.199771  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.199957  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.270508  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.296799  353745 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0916 11:50:25.296844  353745 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0916 11:50:25.296908  353745 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.296933  353745 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0916 11:50:25.296853  353745 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.296965  353745 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.296980  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.296993  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.297001  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.297054  353745 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0916 11:50:25.297079  353745 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0916 11:50:25.297108  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.320461  353745 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0916 11:50:25.320506  353745 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.320553  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.320578  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.333783  353745 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0916 11:50:25.333833  353745 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.333854  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.333872  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.333870  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.333904  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.333948  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.333962  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.414304  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:50:25.414412  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:25.504551  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.504652  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.504665  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.504697  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.504743  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.504760  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.504802  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.1': No such file or directory
	I0916 11:50:25.504831  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 --> /var/lib/minikube/images/kube-scheduler_v1.31.1 (20187136 bytes)
	I0916 11:50:25.715489  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.715508  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.715538  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.715600  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.715604  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.715659  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.913649  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:50:25.913683  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.913700  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:50:25.913708  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:50:25.913757  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0916 11:50:25.913757  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:50:25.913785  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:25.913799  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:25.913659  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:50:25.913838  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:25.913889  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928792  353745 retry.go:31] will retry after 284.043253ms: ssh: rejected: connect failed (open failed)
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928820  353745 retry.go:31] will retry after 206.277714ms: ssh: rejected: connect failed (open failed)
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928832  353745 retry.go:31] will retry after 258.129273ms: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.955883  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.1': No such file or directory
	I0916 11:50:25.955923  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 --> /var/lib/minikube/images/kube-proxy_v1.31.1 (30214144 bytes)
	I0916 11:50:25.955990  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:25.955998  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I0916 11:50:25.956027  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
	I0916 11:50:25.956080  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:25.979690  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:25.980957  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.009367  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:26.009427  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.015683  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:50:26.015784  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:26.015850  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.020816  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:26.020879  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:26.020938  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.035542  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.037133  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.041968  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.219884  353745 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:50:26.219941  353745 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:26.219994  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:26.219941  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.1': No such file or directory
	I0916 11:50:26.220069  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 --> /var/lib/minikube/images/kube-controller-manager_v1.31.1 (26231808 bytes)
	I0916 11:50:28.111335  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.090425901s)
	I0916 11:50:28.111372  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0916 11:50:28.111392  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:28.111394  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.197583966s)
	I0916 11:50:28.111426  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0916 11:50:28.111436  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:28.111440  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: (2.197664353s)
	I0916 11:50:28.111456  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0916 11:50:28.111476  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0916 11:50:28.111454  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0916 11:50:28.111523  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.197610351s)
	I0916 11:50:28.111565  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.1': No such file or directory
	I0916 11:50:28.111596  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 --> /var/lib/minikube/images/kube-apiserver_v1.31.1 (28057088 bytes)
	I0916 11:50:28.111571  353745 ssh_runner.go:235] Completed: which crictl: (1.891560983s)
	I0916 11:50:28.111720  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:29.915246  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.803785881s)
	I0916 11:50:29.915276  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0916 11:50:29.915301  353745 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:29.915321  353745 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.803577324s)
	I0916 11:50:29.915347  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:29.915396  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:32.399830  353745 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.48440876s)
	I0916 11:50:32.399928  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:32.399839  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (2.484470985s)
	I0916 11:50:32.399960  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0916 11:50:32.399988  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:32.400032  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:32.436189  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:50:32.436293  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:33.746085  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.309767608s)
	I0916 11:50:33.746123  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:50:33.746085  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.346024308s)
	I0916 11:50:33.746143  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:50:33.746147  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0916 11:50:33.746168  353745 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0916 11:50:33.746219  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0916 11:50:33.886742  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0916 11:50:33.886791  353745 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:33.886847  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:35.329396  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.442524266s)
	I0916 11:50:35.329425  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0916 11:50:35.329448  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:50:35.329494  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:50:36.770428  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.440905892s)
	I0916 11:50:36.770458  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0916 11:50:36.770484  353745 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:36.770529  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:37.409584  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:50:37.409619  353745 cache_images.go:123] Successfully loaded all cached images
	I0916 11:50:37.409625  353745 cache_images.go:92] duration metric: took 12.470984002s to LoadCachedImages
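The 12.47s LoadCachedImages phase above reduces to three per-image steps: stat the tarball under /var/lib/minikube/images on the node, scp it over from the host cache when missing, then podman-load it so CRI-O can see it. The same steps for one image, run on the node (a sketch using paths from this run):

	sudo stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1   # present after the scp above
	sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	sudo crictl images | grep kube-scheduler   # podman and CRI-O share image storage here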
	I0916 11:50:37.409637  353745 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 11:50:37.409719  353745 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-179932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
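Per the scp calls further down in this log, the [Unit]/[Service] fragment above becomes the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. Inspecting and applying it by hand (a sketch mirroring the daemon-reload/start commands below):

	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # carries the ExecStart above
	sudo systemctl daemon-reload && sudo systemctl start kubelet
	systemctl is-active kubelet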
	I0916 11:50:37.409783  353745 ssh_runner.go:195] Run: crio config
	I0916 11:50:37.452066  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:37.452086  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:37.452097  353745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:50:37.452115  353745 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-179932 NodeName:no-preload-179932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:50:37.452287  353745 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-179932"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
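Before kubeadm consumes it, the config above (written to /var/tmp/minikube/kubeadm.yaml a few lines below) can be sanity-checked by hand. A minimal sketch, assuming kubeadm v1.31 is on PATH; note that kubeadm flags the kubeadm.k8s.io/v1beta3 apiVersion as deprecated, exactly as the init warnings later in this log show:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# optional: rewrite the deprecated v1beta3 spec to the current API version
	# (/tmp/kubeadm-new.yaml is an illustrative output path)
	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml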
	I0916 11:50:37.452356  353745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:50:37.461638  353745 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 11:50:37.461710  353745 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 11:50:37.469780  353745 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 11:50:37.469859  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:50:37.469894  353745 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 11:50:37.469905  353745 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 11:50:37.473264  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:50:37.473298  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 11:50:38.361857  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:50:38.365559  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:50:38.365594  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 11:50:38.493699  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:50:38.508908  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:50:38.512321  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:50:38.512350  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 11:50:38.676578  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:50:38.685326  353745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0916 11:50:38.701489  353745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:50:38.718627  353745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0916 11:50:38.735122  353745 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:50:38.738342  353745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
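The one-liner above is an idempotent hosts update: grep -v strips any stale control-plane.minikube.internal entry, the echo appends the current mapping, and the result is copied back over /etc/hosts with sudo. Verifying the outcome (sketch):

	grep control-plane.minikube.internal /etc/hosts
	# expected: 192.168.103.2	control-plane.minikube.internal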
	I0916 11:50:38.748252  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:38.827198  353745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:50:38.840338  353745 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932 for IP: 192.168.103.2
	I0916 11:50:38.840364  353745 certs.go:194] generating shared ca certs ...
	I0916 11:50:38.840393  353745 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.840560  353745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:50:38.840615  353745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:50:38.840627  353745 certs.go:256] generating profile certs ...
	I0916 11:50:38.840704  353745 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key
	I0916 11:50:38.840723  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt with IP's: []
	I0916 11:50:38.935911  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt ...
	I0916 11:50:38.935940  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: {Name:mkcfebd0395ea27149b681830fddcbfa0b287805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.936111  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key ...
	I0916 11:50:38.936122  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key: {Name:mkedb064e2171125bc65687de4300740d0c5fa5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.936197  353745 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391
	I0916 11:50:38.936211  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:50:39.161110  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 ...
	I0916 11:50:39.161163  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391: {Name:mk6e55865c08038f9c83c62a1e3de8ab46e37505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.161381  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391 ...
	I0916 11:50:39.161403  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391: {Name:mk7fa07a5319463f001b0ea91f26d16d256d3f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.161513  353745 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt
	I0916 11:50:39.161622  353745 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key
	I0916 11:50:39.161703  353745 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key
	I0916 11:50:39.161726  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt with IP's: []
	I0916 11:50:39.230589  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt ...
	I0916 11:50:39.230621  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt: {Name:mk9382a33ca50c5dc46808284f9e12b01271ffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.230825  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key ...
	I0916 11:50:39.230843  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key: {Name:mk92c148096f3309b2fe7cab24919949c9166c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.231071  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:50:39.231123  353745 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:50:39.231142  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:50:39.231171  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:50:39.231206  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:50:39.231238  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:50:39.231294  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:50:39.231970  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:50:39.254719  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:50:39.277272  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:50:39.299028  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:50:39.321434  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:50:39.343976  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:50:39.367682  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:50:39.389857  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:50:39.411764  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:50:39.434314  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:50:39.455995  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:50:39.478225  353745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:50:39.493981  353745 ssh_runner.go:195] Run: openssl version
	I0916 11:50:39.498988  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:50:39.507998  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.511432  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.511491  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.518178  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:50:39.528049  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:50:39.538529  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.542466  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.542525  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.550361  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:50:39.559880  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:50:39.569042  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.572563  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.572616  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.578893  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
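The three blocks above all follow OpenSSL's subject-hash trust-store convention: each PEM is placed under /usr/share/ca-certificates and symlinked as /etc/ssl/certs/<subject-hash>.0, where the hash is what openssl x509 -hash prints. Rebuilding one of the links by hand (a sketch using the minikubeCA cert from this run):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"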
	I0916 11:50:39.587606  353745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:50:39.590786  353745 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:50:39.590838  353745 kubeadm.go:392] StartCluster: {Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:50:39.590919  353745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:50:39.590962  353745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:50:39.623993  353745 cri.go:89] found id: ""
	I0916 11:50:39.624065  353745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:50:39.632782  353745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:50:39.641165  353745 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:50:39.641220  353745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:50:39.649467  353745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:50:39.649485  353745 kubeadm.go:157] found existing configuration files:
	
	I0916 11:50:39.649526  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:50:39.657545  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:50:39.657603  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:50:39.665725  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:50:39.674189  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:50:39.674239  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:50:39.681997  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:50:39.690004  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:50:39.690062  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:50:39.697984  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:50:39.706536  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:50:39.706602  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
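The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm init regenerates it. Condensed into a loop (a sketch equivalent to the commands above):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done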
	I0916 11:50:39.714682  353745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:50:39.749285  353745 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:50:39.749390  353745 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:50:39.766004  353745 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:50:39.766125  353745 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:50:39.766178  353745 kubeadm.go:310] OS: Linux
	I0916 11:50:39.766223  353745 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:50:39.766282  353745 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:50:39.766324  353745 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:50:39.766369  353745 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:50:39.766430  353745 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:50:39.766507  353745 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:50:39.766575  353745 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:50:39.766639  353745 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:50:39.766706  353745 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:50:39.816683  353745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:50:39.816778  353745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:50:39.816904  353745 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:50:39.829767  353745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:50:39.833943  353745 out.go:235]   - Generating certificates and keys ...
	I0916 11:50:39.834055  353745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:50:39.834121  353745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:50:39.912342  353745 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:50:39.981611  353745 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:50:40.100442  353745 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:50:40.353713  353745 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:50:40.529814  353745 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:50:40.529974  353745 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-179932] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:50:40.662396  353745 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:50:40.662532  353745 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-179932] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:50:40.978365  353745 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:50:41.089411  353745 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:50:41.246484  353745 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:50:41.246591  353745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:50:41.338255  353745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:50:41.520493  353745 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:50:41.631124  353745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:50:41.869980  353745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:50:42.120470  353745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:50:42.121129  353745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:50:42.123645  353745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:50:42.125750  353745 out.go:235]   - Booting up control plane ...
	I0916 11:50:42.125883  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:50:42.125983  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:50:42.126071  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:50:42.136142  353745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:50:42.141313  353745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:50:42.141405  353745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:50:42.219091  353745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:50:42.219242  353745 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:50:42.720318  353745 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.359131ms
	I0916 11:50:42.720396  353745 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:50:47.221859  353745 kubeadm.go:310] [api-check] The API server is healthy after 4.501530278s
	I0916 11:50:47.232717  353745 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:50:47.243418  353745 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:50:47.260829  353745 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:50:47.261089  353745 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-179932 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:50:47.268219  353745 kubeadm.go:310] [bootstrap-token] Using token: wbzbzb.swi91qeomz7323fx
	I0916 11:50:47.270698  353745 out.go:235]   - Configuring RBAC rules ...
	I0916 11:50:47.270836  353745 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:50:47.273506  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:50:47.279257  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:50:47.281945  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:50:47.284450  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:50:47.288148  353745 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:50:47.628407  353745 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:50:48.046362  353745 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:50:48.628627  353745 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:50:48.629544  353745 kubeadm.go:310] 
	I0916 11:50:48.629646  353745 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:50:48.629658  353745 kubeadm.go:310] 
	I0916 11:50:48.629750  353745 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:50:48.629775  353745 kubeadm.go:310] 
	I0916 11:50:48.629834  353745 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:50:48.629927  353745 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:50:48.630007  353745 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:50:48.630018  353745 kubeadm.go:310] 
	I0916 11:50:48.630095  353745 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:50:48.630105  353745 kubeadm.go:310] 
	I0916 11:50:48.630171  353745 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:50:48.630180  353745 kubeadm.go:310] 
	I0916 11:50:48.630257  353745 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:50:48.630344  353745 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:50:48.630458  353745 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:50:48.630473  353745 kubeadm.go:310] 
	I0916 11:50:48.630589  353745 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:50:48.630728  353745 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:50:48.630737  353745 kubeadm.go:310] 
	I0916 11:50:48.630851  353745 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wbzbzb.swi91qeomz7323fx \
	I0916 11:50:48.631029  353745 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:50:48.631080  353745 kubeadm.go:310] 	--control-plane 
	I0916 11:50:48.631097  353745 kubeadm.go:310] 
	I0916 11:50:48.631194  353745 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:50:48.631209  353745 kubeadm.go:310] 
	I0916 11:50:48.631311  353745 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wbzbzb.swi91qeomz7323fx \
	I0916 11:50:48.631477  353745 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:50:48.632992  353745 kubeadm.go:310] W0916 11:50:39.746676    2273 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:50:48.633284  353745 kubeadm.go:310] W0916 11:50:39.747329    2273 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:50:48.633518  353745 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:50:48.633654  353745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:50:48.633668  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:48.633678  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:48.636566  353745 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:50:48.638084  353745 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:50:48.642054  353745 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:50:48.642074  353745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:50:48.659841  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
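	(The 2601-byte manifest scp'd and applied above is the kindnet CNI setup chosen at cni.go:143 for the docker driver + crio runtime. A hedged follow-up check, reusing the in-VM kubectl and kubeconfig paths from the log and assuming the DaemonSet is named kindnet, consistent with the kindnet-2678b pod later in this report:)
	  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status ds/kindnet --timeout=120s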
	I0916 11:50:48.854859  353745 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:50:48.854907  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:48.854934  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-179932 minikube.k8s.io/updated_at=2024_09_16T11_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=no-preload-179932 minikube.k8s.io/primary=true
	I0916 11:50:48.862914  353745 ops.go:34] apiserver oom_adj: -16
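	(ops.go above echoes the value read by the /proc/.../oom_adj command at 11:50:48.854859: -16 strongly biases the kernel OOM killer away from kube-apiserver (oom_adj ranges -17 to 15; lower means less likely to be killed). oom_adj is the legacy knob; a hedged check of its modern counterpart:)
	  cat /proc/"$(pgrep -xn kube-apiserver)"/oom_score_adj   # oom_score_adj is the non-deprecated interface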
	I0916 11:50:48.947264  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:49.447477  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:49.948030  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:50.448348  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:50.947452  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:51.447333  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:51.947456  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:52.447460  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:52.948258  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:53.018251  353745 kubeadm.go:1113] duration metric: took 4.163399098s to wait for elevateKubeSystemPrivileges
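	(elevateKubeSystemPrivileges is the half-second retry loop above: after creating the minikube-rbac clusterrolebinding (cluster-admin for kube-system:default) at 11:50:48.854907, minikube polls `get sa default` until the controller-manager has minted the default ServiceAccount. A minimal bash sketch of the same wait, with the 0.5s cadence read off the timestamps above:)
	  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5    # matches the ~500ms spacing of the retries above
	  done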
	I0916 11:50:53.018293  353745 kubeadm.go:394] duration metric: took 13.427458529s to StartCluster
	I0916 11:50:53.018313  353745 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:53.018394  353745 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:50:53.019749  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:53.019996  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:50:53.020006  353745 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:50:53.020089  353745 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:50:53.020185  353745 addons.go:69] Setting storage-provisioner=true in profile "no-preload-179932"
	I0916 11:50:53.020206  353745 addons.go:69] Setting default-storageclass=true in profile "no-preload-179932"
	I0916 11:50:53.020229  353745 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-179932"
	I0916 11:50:53.020239  353745 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:53.020210  353745 addons.go:234] Setting addon storage-provisioner=true in "no-preload-179932"
	I0916 11:50:53.020316  353745 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:50:53.020631  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.020797  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.022059  353745 out.go:177] * Verifying Kubernetes components...
	I0916 11:50:53.023597  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:53.044946  353745 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:53.045147  353745 addons.go:234] Setting addon default-storageclass=true in "no-preload-179932"
	I0916 11:50:53.045190  353745 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:50:53.045672  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.046362  353745 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:50:53.046382  353745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:50:53.046420  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:53.067056  353745 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:50:53.067095  353745 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:50:53.067169  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:53.076321  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:53.088844  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
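	(The -f template in the two inspect commands above pulls the published host port for the container's SSH endpoint out of docker's JSON: the inner index looks up the "22/tcp" key of .NetworkSettings.Ports, the outer index takes the first binding in that list, and .HostPort extracts the port. Run standalone it reduces to:)
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-179932
	  # -> 33098, the Port in both sshutil lines above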
	I0916 11:50:53.210469  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:50:53.312446  353745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:50:53.323161  353745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:50:53.416369  353745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:50:53.603739  353745 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
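	(The replace pipeline at 11:50:53.210469 is what produced the confirmation above: sed inserts a hosts block (192.168.103.1 host.minikube.internal, with fallthrough) before the Corefile's forward directive and a log directive before errors, then pipes the result back through kubectl replace. To see the rewritten Corefile, one hedged option:)
	  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'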
	I0916 11:50:53.605015  353745 node_ready.go:35] waiting up to 6m0s for node "no-preload-179932" to be "Ready" ...
	I0916 11:50:53.841034  353745 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:50:53.842341  353745 addons.go:510] duration metric: took 822.2633ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:50:54.107859  353745 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-179932" context rescaled to 1 replicas
	I0916 11:50:55.608268  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:50:57.608902  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:00.108773  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:02.608151  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:04.608412  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:07.108282  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:09.108982  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:10.608730  353745 node_ready.go:49] node "no-preload-179932" has status "Ready":"True"
	I0916 11:51:10.608754  353745 node_ready.go:38] duration metric: took 17.003714881s for node "no-preload-179932" to be "Ready" ...
	I0916 11:51:10.608765  353745 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:51:10.615200  353745 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.120543  353745 pod_ready.go:93] pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.120590  353745 pod_ready.go:82] duration metric: took 505.366914ms for pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.120600  353745 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.124478  353745 pod_ready.go:93] pod "etcd-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.124499  353745 pod_ready.go:82] duration metric: took 3.891956ms for pod "etcd-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.124510  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.128756  353745 pod_ready.go:93] pod "kube-apiserver-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.128778  353745 pod_ready.go:82] duration metric: took 4.260684ms for pod "kube-apiserver-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.128790  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.132774  353745 pod_ready.go:93] pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.132795  353745 pod_ready.go:82] duration metric: took 3.997805ms for pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.132806  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ckd46" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.409098  353745 pod_ready.go:93] pod "kube-proxy-ckd46" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.409126  353745 pod_ready.go:82] duration metric: took 276.310033ms for pod "kube-proxy-ckd46" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.409139  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.809415  353745 pod_ready.go:93] pod "kube-scheduler-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.809441  353745 pod_ready.go:82] duration metric: took 400.294201ms for pod "kube-scheduler-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.809456  353745 pod_ready.go:39] duration metric: took 1.200676939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:51:11.809472  353745 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:51:11.809528  353745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:51:11.821759  353745 api_server.go:72] duration metric: took 18.801724291s to wait for apiserver process to appear ...
	I0916 11:51:11.821784  353745 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:51:11.821807  353745 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:51:11.825478  353745 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:51:11.826388  353745 api_server.go:141] control plane version: v1.31.1
	I0916 11:51:11.826412  353745 api_server.go:131] duration metric: took 4.6217ms to wait for apiserver health ...
	I0916 11:51:11.826420  353745 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:51:12.013073  353745 system_pods.go:59] 8 kube-system pods found
	I0916 11:51:12.013103  353745 system_pods.go:61] "coredns-7c65d6cfc9-sfxnk" [ec2c3f40-5323-4dce-ae07-29c4537f3067] Running
	I0916 11:51:12.013109  353745 system_pods.go:61] "etcd-no-preload-179932" [3af42b3e-f310-4932-b24a-85d3b55e19a0] Running
	I0916 11:51:12.013112  353745 system_pods.go:61] "kindnet-2678b" [28d0afc4-03fd-4b6e-8ced-8b440d6153ff] Running
	I0916 11:51:12.013116  353745 system_pods.go:61] "kube-apiserver-no-preload-179932" [7e6f5af8-a459-4b8b-b1b8-5df32f37cfe3] Running
	I0916 11:51:12.013120  353745 system_pods.go:61] "kube-controller-manager-no-preload-179932" [313b35c1-1982-4f0a-a0f9-ffde80f7989e] Running
	I0916 11:51:12.013123  353745 system_pods.go:61] "kube-proxy-ckd46" [2c024fac-4113-4c1b-8b50-3e066e7b9b67] Running
	I0916 11:51:12.013127  353745 system_pods.go:61] "kube-scheduler-no-preload-179932" [969d30fc-6575-4f1f-bcd0-32e8132681e9] Running
	I0916 11:51:12.013133  353745 system_pods.go:61] "storage-provisioner" [040e8794-ddea-4f91-b709-cb999b3c71d5] Running
	I0916 11:51:12.013141  353745 system_pods.go:74] duration metric: took 186.714262ms to wait for pod list to return data ...
	I0916 11:51:12.013150  353745 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:51:12.209497  353745 default_sa.go:45] found service account: "default"
	I0916 11:51:12.209523  353745 default_sa.go:55] duration metric: took 196.365905ms for default service account to be created ...
	I0916 11:51:12.209532  353745 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:51:12.411009  353745 system_pods.go:86] 8 kube-system pods found
	I0916 11:51:12.411045  353745 system_pods.go:89] "coredns-7c65d6cfc9-sfxnk" [ec2c3f40-5323-4dce-ae07-29c4537f3067] Running
	I0916 11:51:12.411056  353745 system_pods.go:89] "etcd-no-preload-179932" [3af42b3e-f310-4932-b24a-85d3b55e19a0] Running
	I0916 11:51:12.411063  353745 system_pods.go:89] "kindnet-2678b" [28d0afc4-03fd-4b6e-8ced-8b440d6153ff] Running
	I0916 11:51:12.411069  353745 system_pods.go:89] "kube-apiserver-no-preload-179932" [7e6f5af8-a459-4b8b-b1b8-5df32f37cfe3] Running
	I0916 11:51:12.411075  353745 system_pods.go:89] "kube-controller-manager-no-preload-179932" [313b35c1-1982-4f0a-a0f9-ffde80f7989e] Running
	I0916 11:51:12.411080  353745 system_pods.go:89] "kube-proxy-ckd46" [2c024fac-4113-4c1b-8b50-3e066e7b9b67] Running
	I0916 11:51:12.411085  353745 system_pods.go:89] "kube-scheduler-no-preload-179932" [969d30fc-6575-4f1f-bcd0-32e8132681e9] Running
	I0916 11:51:12.411090  353745 system_pods.go:89] "storage-provisioner" [040e8794-ddea-4f91-b709-cb999b3c71d5] Running
	I0916 11:51:12.411104  353745 system_pods.go:126] duration metric: took 201.565069ms to wait for k8s-apps to be running ...
	I0916 11:51:12.411116  353745 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:51:12.411160  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:51:12.422546  353745 system_svc.go:56] duration metric: took 11.421673ms WaitForService to wait for kubelet
	I0916 11:51:12.422583  353745 kubeadm.go:582] duration metric: took 19.402550835s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:51:12.422611  353745 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:51:12.609131  353745 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:51:12.609166  353745 node_conditions.go:123] node cpu capacity is 8
	I0916 11:51:12.609185  353745 node_conditions.go:105] duration metric: took 186.568247ms to run NodePressure ...
	I0916 11:51:12.609200  353745 start.go:241] waiting for startup goroutines ...
	I0916 11:51:12.609211  353745 start.go:246] waiting for cluster config update ...
	I0916 11:51:12.609225  353745 start.go:255] writing updated cluster config ...
	I0916 11:51:12.659042  353745 ssh_runner.go:195] Run: rm -f paused
	I0916 11:51:12.751470  353745 out.go:177] * Done! kubectl is now configured to use "no-preload-179932" cluster and "default" namespace by default
	E0916 11:51:12.791894  353745 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
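	(The start.go:291 error above is ENOEXEC from the kernel: /usr/local/bin/kubectl is not a binary this host can execute, which typically means a wrong-architecture build or a corrupted/truncated file, so every step that shells out to that kubectl fails the same way. A hedged triage sketch; the expected outputs assume an amd64 host, as described under "describe nodes" below, and were not captured from this run:)
	  uname -m                                         # x86_64 on this host
	  file /usr/local/bin/kubectl                      # expect: ELF 64-bit LSB executable, x86-64
	  head -c 4 /usr/local/bin/kubectl | od -An -tx1   # a valid ELF begins 7f 45 4c 46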
	
	
	==> CRI-O <==
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.679711273Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=038565d4-4b28-4f92-9f5c-1f345ce69cae name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.681554773Z" level=info msg="Got pod network &{Name:coredns-7c65d6cfc9-sfxnk Namespace:kube-system ID:9b913c18240cf0e8dd7d375145b81c674010cafd0f8eb5bf5fb483007b2b3943 UID:ec2c3f40-5323-4dce-ae07-29c4537f3067 NetNS:/var/run/netns/5e4fb530-c158-4e60-887b-fdcc17b17070 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.681861332Z" level=info msg="Checking pod kube-system_coredns-7c65d6cfc9-sfxnk for CNI network kindnet (type=ptp)"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.682599633Z" level=info msg="Ran pod sandbox 12785168d30bd14a1cc2dc6399b74aa1137f3ce5f50dbac8ec101d017e6338ac with infra container: kube-system/storage-provisioner/POD" id=038565d4-4b28-4f92-9f5c-1f345ce69cae name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.683730445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=520286a6-6890-419a-a6e2-7f8515d87263 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.683936535Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=520286a6-6890-419a-a6e2-7f8515d87263 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.684321166Z" level=info msg="Ran pod sandbox 9b913c18240cf0e8dd7d375145b81c674010cafd0f8eb5bf5fb483007b2b3943 with infra container: kube-system/coredns-7c65d6cfc9-sfxnk/POD" id=4cc1ac77-30aa-421a-8455-1424a48b7b45 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.684632672Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=add2cd53-f92d-44d3-8e9e-f4a80f587174 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.684867835Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=add2cd53-f92d-44d3-8e9e-f4a80f587174 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.685233945Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=9b827247-ab9f-48bf-b03b-b2d945994edf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.685445748Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:bb97ed7cb2429a420726fbc329199f4600f59ea307bf93745052a9dd7e3f9955],Size_:63269914,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=9b827247-ab9f-48bf-b03b-b2d945994edf name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.685547596Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=863009b1-e288-4943-af1a-62501e05710f name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.685634399Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686014530Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=8fa9e99f-9f75-4dba-92e6-a499f81e7d6e name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686178090Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:bb97ed7cb2429a420726fbc329199f4600f59ea307bf93745052a9dd7e3f9955],Size_:63269914,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8fa9e99f-9f75-4dba-92e6-a499f81e7d6e name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686818980Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-sfxnk/coredns" id=4f48f1dd-9fe1-44ee-bc6a-ca92014a90e9 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686894528Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.695539343Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7d28120e820275943bf61bbc418c5d626d58de7cf91c37ff58a8d3f09511b328/merged/etc/passwd: no such file or directory"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.695574739Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7d28120e820275943bf61bbc418c5d626d58de7cf91c37ff58a8d3f09511b328/merged/etc/group: no such file or directory"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.732731855Z" level=info msg="Created container 319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c: kube-system/storage-provisioner/storage-provisioner" id=863009b1-e288-4943-af1a-62501e05710f name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.733590659Z" level=info msg="Starting container: 319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c" id=413ac176-bf5d-4bdb-85b8-9aee1826b477 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.740503723Z" level=info msg="Started container" PID=3240 containerID=319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c description=kube-system/storage-provisioner/storage-provisioner id=413ac176-bf5d-4bdb-85b8-9aee1826b477 name=/runtime.v1.RuntimeService/StartContainer sandboxID=12785168d30bd14a1cc2dc6399b74aa1137f3ce5f50dbac8ec101d017e6338ac
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.744459532Z" level=info msg="Created container 1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac: kube-system/coredns-7c65d6cfc9-sfxnk/coredns" id=4f48f1dd-9fe1-44ee-bc6a-ca92014a90e9 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.745066701Z" level=info msg="Starting container: 1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac" id=dd10f8cb-ef54-4a2d-9029-849d6f82fa90 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.751399545Z" level=info msg="Started container" PID=3255 containerID=1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac description=kube-system/coredns-7c65d6cfc9-sfxnk/coredns id=dd10f8cb-ef54-4a2d-9029-849d6f82fa90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b913c18240cf0e8dd7d375145b81c674010cafd0f8eb5bf5fb483007b2b3943
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a534bc0b815b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     4 seconds ago       Running             coredns                   0                   9b913c18240cf       coredns-7c65d6cfc9-sfxnk
	319ec20c27cc4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     4 seconds ago       Running             storage-provisioner       0                   12785168d30bd       storage-provisioner
	4d6a1ab5026f1       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b   16 seconds ago      Running             kindnet-cni               0                   c69d7a8de2d53       kindnet-2678b
	589063428fb28       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                     20 seconds ago      Running             kube-proxy                0                   c69cfe3f95afb       kube-proxy-ckd46
	4a9a8c6b23212       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                     32 seconds ago      Running             kube-controller-manager   0                   12f1b77dcc6a5       kube-controller-manager-no-preload-179932
	6aec60ed07214       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                     32 seconds ago      Running             kube-scheduler            0                   c99d8af113358       kube-scheduler-no-preload-179932
	8d5a1ec60515c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     32 seconds ago      Running             etcd                      0                   36eff604d6002       etcd-no-preload-179932
	3a0b6ce23d737       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                     32 seconds ago      Running             kube-apiserver            0                   e61434917d78a       kube-apiserver-no-preload-179932
	
	
	==> coredns [1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52027 - 34155 "HINFO IN 7043137295982352462.1682836216271367565. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011352241s
	
	
	==> describe nodes <==
	Name:               no-preload-179932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-179932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=no-preload-179932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_50_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:50:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-179932
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:51:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:51:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-179932
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2b5d727e19a44ae98155858b9a8e152
	  System UUID:                93f9cbba-c2f8-4376-ab54-e687ad96b58b
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sfxnk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22s
	  kube-system                 etcd-no-preload-179932                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-2678b                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-no-preload-179932             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-no-preload-179932    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-ckd46                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-no-preload-179932             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
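	(The totals above check out against the per-pod rows: CPU requests 100m+100m+100m+250m+200m+0+100m+0 = 850m, i.e. ~10% of the 8-CPU node; the lone CPU limit is kindnet's 100m; memory requests 70Mi+100Mi+50Mi = 220Mi and limits 170Mi+50Mi = 220Mi. A one-line arithmetic check:)
	  echo "$((100+100+100+250+200+100))m CPU requests, $((70+100+50))Mi memory requests"   # -> 850m, 220Mi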
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 20s   kube-proxy       
	  Normal   Starting                 28s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 28s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  27s   kubelet          Node no-preload-179932 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27s   kubelet          Node no-preload-179932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27s   kubelet          Node no-preload-179932 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           23s   node-controller  Node no-preload-179932 event: Registered Node no-preload-179932 in Controller
	  Normal   NodeReady                5s    kubelet          Node no-preload-179932 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 7b 93 72 59 99 08 06
	[Sep16 11:38] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e c8 59 6d ba 48 08 06
	[Sep16 11:39] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 0e 56 ba 2b 08 08 06
	[  +0.072831] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 e4 c5 5d 5b cd 08 06
	
	
	==> etcd [8d5a1ec60515c3d2cf2ca04cb04d81bb6e475fd0facec6605bc2f2857dca90f5] <==
	{"level":"info","ts":"2024-09-16T11:50:43.301859Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:50:43.302121Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:50:43.302157Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:50:43.302244Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:50:43.302271Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:50:43.828178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.829531Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.829807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:50:43.829807Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-179932 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:50:43.829838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:50:43.830143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:50:43.830179Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:50:43.830312Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.830401Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.830433Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.831241Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:50:43.831311Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:50:43.832100Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T11:50:43.832209Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:51:15 up  1:33,  0 users,  load average: 1.27, 0.87, 0.83
	Linux no-preload-179932 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4d6a1ab5026f16f7b6b74929edce565d1b79109723753135d31aaf14d219b7b2] <==
	I0916 11:50:59.494689       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:50:59.494926       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0916 11:50:59.495223       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:50:59.495243       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:50:59.495259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:50:59.893976       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:50:59.893995       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:50:59.894000       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:51:00.094309       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:51:00.094342       1 metrics.go:61] Registering metrics
	I0916 11:51:00.094412       1 controller.go:374] Syncing nftables rules
	I0916 11:51:09.898148       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:51:09.898215       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3a0b6ce23d7370d3f0843ffa20a8f351fadb19d104cdb3b6c793368ecae40e03] <==
	I0916 11:50:45.320269       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:50:45.320285       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:50:45.320376       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:50:45.320409       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:50:45.320417       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:50:45.320424       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:50:45.320429       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:50:45.320609       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:50:45.321396       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:50:45.393816       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:50:46.224206       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:50:46.229372       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:50:46.229392       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:50:46.702715       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:50:46.742521       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:50:46.830384       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:50:46.836946       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:50:46.838174       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:50:46.842062       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:50:47.301396       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:50:48.037176       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:50:48.045066       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:50:48.053110       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:50:52.653935       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:50:53.055171       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4a9a8c6b232126b3a3f834266ab09739227dd047f65a57809b27690d13071f64] <==
	I0916 11:50:52.251935       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 11:50:52.256873       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:50:52.258059       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:50:52.302452       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 11:50:52.302471       1 shared_informer.go:320] Caches are synced for expand
	I0916 11:50:52.302534       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 11:50:52.668325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:50:52.701193       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:50:52.701224       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:50:53.010133       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:50:53.220087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="562.02849ms"
	I0916 11:50:53.228924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.785211ms"
	I0916 11:50:53.229036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.151µs"
	I0916 11:50:53.229283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.06µs"
	I0916 11:50:53.642413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.624ms"
	I0916 11:50:53.693844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.377531ms"
	I0916 11:50:53.694092       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.73µs"
	I0916 11:51:10.344634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:10.352621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:10.357500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.304µs"
	I0916 11:51:10.378868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="91.565µs"
	I0916 11:51:11.071302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.849603ms"
	I0916 11:51:11.071412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.462µs"
	I0916 11:51:12.024328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:12.024346       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [589063428fb28a5c87aad20f178d6bbf4342f3d4061b3649c5a14a2f2612be36] <==
	I0916 11:50:55.310468       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:50:55.424560       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 11:50:55.424623       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:50:55.443689       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:50:55.443760       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:50:55.445611       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:50:55.446002       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:50:55.446028       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:50:55.447139       1 config.go:328] "Starting node config controller"
	I0916 11:50:55.447220       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:50:55.447186       1 config.go:199] "Starting service config controller"
	I0916 11:50:55.447262       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:50:55.447132       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:50:55.447289       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:50:55.547414       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:50:55.547439       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:50:55.547449       1 shared_informer.go:320] Caches are synced for service config
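	(The server.go:234 warning above means nodePortAddresses was left unset, so NodePort traffic is accepted on every local IP; kube-proxy's own suggestion is to restrict it to the primary node IP. A hedged sketch using the kubeadm-managed ConfigMap, with the name and the "config.conf" data key assumed from kubeadm defaults:)
	  kubectl -n kube-system edit configmap kube-proxy          # in config.conf, set: nodePortAddresses: ["primary"]
	  kubectl -n kube-system rollout restart daemonset kube-proxy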
	
	
	==> kube-scheduler [6aec60ed072148d0a4ddf5d94e307f15b744a472ca2e73827876970e20146006] <==
	W0916 11:50:45.313624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0916 11:50:45.313521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.313641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 11:50:45.313653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.313693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:50:45.313744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:50:45.313988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.314171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.314196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.120634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:46.120679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.331027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:50:46.331070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.353666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:50:46.353722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.411296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:50:46.411335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.429940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:50:46.429988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.458442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:50:46.458490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0916 11:50:46.910182       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294366    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c024fac-4113-4c1b-8b50-3e066e7b9b67-lib-modules\") pod \"kube-proxy-ckd46\" (UID: \"2c024fac-4113-4c1b-8b50-3e066e7b9b67\") " pod="kube-system/kube-proxy-ckd46"
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294390    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-cni-cfg\") pod \"kindnet-2678b\" (UID: \"28d0afc4-03fd-4b6e-8ced-8b440d6153ff\") " pod="kube-system/kindnet-2678b"
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294658    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-xtables-lock\") pod \"kindnet-2678b\" (UID: \"28d0afc4-03fd-4b6e-8ced-8b440d6153ff\") " pod="kube-system/kindnet-2678b"
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294702    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltv87\" (UniqueName: \"kubernetes.io/projected/2c024fac-4113-4c1b-8b50-3e066e7b9b67-kube-api-access-ltv87\") pod \"kube-proxy-ckd46\" (UID: \"2c024fac-4113-4c1b-8b50-3e066e7b9b67\") " pod="kube-system/kube-proxy-ckd46"
	Sep 16 11:50:53 no-preload-179932 kubelet[2607]: I0916 11:50:53.294726    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-lib-modules\") pod \"kindnet-2678b\" (UID: \"28d0afc4-03fd-4b6e-8ced-8b440d6153ff\") " pod="kube-system/kindnet-2678b"
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.402869    2607 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.402933    2607 projected.go:194] Error preparing data for projected volume kube-api-access-ltv87 for pod kube-system/kube-proxy-ckd46: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.402869    2607 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.403023    2607 projected.go:194] Error preparing data for projected volume kube-api-access-mpmnk for pod kube-system/kindnet-2678b: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.403029    2607 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c024fac-4113-4c1b-8b50-3e066e7b9b67-kube-api-access-ltv87 podName:2c024fac-4113-4c1b-8b50-3e066e7b9b67 nodeName:}" failed. No retries permitted until 2024-09-16 11:50:54.902996386 +0000 UTC m=+7.050915339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ltv87" (UniqueName: "kubernetes.io/projected/2c024fac-4113-4c1b-8b50-3e066e7b9b67-kube-api-access-ltv87") pod "kube-proxy-ckd46" (UID: "2c024fac-4113-4c1b-8b50-3e066e7b9b67") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.403062    2607 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-kube-api-access-mpmnk podName:28d0afc4-03fd-4b6e-8ced-8b440d6153ff nodeName:}" failed. No retries permitted until 2024-09-16 11:50:54.903050076 +0000 UTC m=+7.050969022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mpmnk" (UniqueName: "kubernetes.io/projected/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-kube-api-access-mpmnk") pod "kindnet-2678b" (UID: "28d0afc4-03fd-4b6e-8ced-8b440d6153ff") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: I0916 11:50:54.905280    2607 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:50:56 no-preload-179932 kubelet[2607]: I0916 11:50:56.027618    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ckd46" podStartSLOduration=3.027598211 podStartE2EDuration="3.027598211s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:50:56.027415248 +0000 UTC m=+8.175334204" watchObservedRunningTime="2024-09-16 11:50:56.027598211 +0000 UTC m=+8.175517164"
	Sep 16 11:50:58 no-preload-179932 kubelet[2607]: E0916 11:50:58.016042    2607 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487458015815892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92080,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:50:58 no-preload-179932 kubelet[2607]: E0916 11:50:58.016091    2607 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487458015815892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92080,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:02 no-preload-179932 kubelet[2607]: I0916 11:51:02.695100    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2678b" podStartSLOduration=5.586249323 podStartE2EDuration="9.695079637s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="2024-09-16 11:50:55.227033013 +0000 UTC m=+7.374951948" lastFinishedPulling="2024-09-16 11:50:59.335863327 +0000 UTC m=+11.483782262" observedRunningTime="2024-09-16 11:51:00.036700007 +0000 UTC m=+12.184618972" watchObservedRunningTime="2024-09-16 11:51:02.695079637 +0000 UTC m=+14.842998616"
	Sep 16 11:51:08 no-preload-179932 kubelet[2607]: E0916 11:51:08.017291    2607 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487468017096773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:102273,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:08 no-preload-179932 kubelet[2607]: E0916 11:51:08.017367    2607 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487468017096773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:102273,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.337301    2607 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518235    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24qdf\" (UniqueName: \"kubernetes.io/projected/ec2c3f40-5323-4dce-ae07-29c4537f3067-kube-api-access-24qdf\") pod \"coredns-7c65d6cfc9-sfxnk\" (UID: \"ec2c3f40-5323-4dce-ae07-29c4537f3067\") " pod="kube-system/coredns-7c65d6cfc9-sfxnk"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518285    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdhnp\" (UniqueName: \"kubernetes.io/projected/040e8794-ddea-4f91-b709-cb999b3c71d5-kube-api-access-tdhnp\") pod \"storage-provisioner\" (UID: \"040e8794-ddea-4f91-b709-cb999b3c71d5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518302    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec2c3f40-5323-4dce-ae07-29c4537f3067-config-volume\") pod \"coredns-7c65d6cfc9-sfxnk\" (UID: \"ec2c3f40-5323-4dce-ae07-29c4537f3067\") " pod="kube-system/coredns-7c65d6cfc9-sfxnk"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518330    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/040e8794-ddea-4f91-b709-cb999b3c71d5-tmp\") pod \"storage-provisioner\" (UID: \"040e8794-ddea-4f91-b709-cb999b3c71d5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:51:11 no-preload-179932 kubelet[2607]: I0916 11:51:11.055193    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=18.055168777 podStartE2EDuration="18.055168777s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:51:11.055129269 +0000 UTC m=+23.203048223" watchObservedRunningTime="2024-09-16 11:51:11.055168777 +0000 UTC m=+23.203087726"
	Sep 16 11:51:11 no-preload-179932 kubelet[2607]: I0916 11:51:11.065541    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sfxnk" podStartSLOduration=18.06551962 podStartE2EDuration="18.06551962s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:51:11.065119525 +0000 UTC m=+23.213038480" watchObservedRunningTime="2024-09-16 11:51:11.06551962 +0000 UTC m=+23.213438552"
	
	
	==> storage-provisioner [319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c] <==
	I0916 11:51:10.752747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:51:10.762574       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:51:10.762667       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:51:10.798892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:51:10.799029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6492543-a96c-4e35-8fc0-19e6c7bc9c6d", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9 became leader
	I0916 11:51:10.799116       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9!
	I0916 11:51:10.899335       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9!

-- /stdout --
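The scheduler's burst of "forbidden" list errors in the log above is the usual startup race: its informers begin listing before the system:kube-scheduler RBAC bindings have propagated, and the closing "Caches are synced" line shows it recovered; the kubelet's repeated "missing image stats" eviction-manager errors are likewise recurring noise in these crio runs rather than the failure under test, since the pods went on to start. Had the RBAC errors persisted, one way to probe the binding would be the sketch below (it assumes a working kubectl and impersonation rights on the cluster):

    kubectl --context no-preload-179932 auth can-i list pods --as=system:kube-scheduler
    # "yes" is expected once the bootstrap RBAC bindings have propagated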
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-179932 -n no-preload-179932
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (564.686µs)
helpers_test.go:263: kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
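Every kubectl invocation in this run dies with "exec format error" in well under a millisecond, which points at the host's /usr/local/bin/kubectl binary (wrong architecture or a truncated download) rather than at the cluster, which the logs above show healthy. A quick triage sketch on the agent (the path is taken from the failing command):

    file /usr/local/bin/kubectl   # should report an ELF executable matching the machine below
    uname -m                      # x86_64 on this agent, per the start logs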
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.53s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.58s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-179932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-179932 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-179932 describe deploy/metrics-server -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (539.576µs)
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-179932 describe deploy/metrics-server -n kube-system": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
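The assertion compares the deployment's image against " fake.domain/registry.k8s.io/echoserver:1.4", but because the describe call never executed, the "Addon deployment info" it compares against is empty. With a working kubectl, the override could be read directly, as in this sketch using the context and deployment named in the test:

    kubectl --context no-preload-179932 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to print fake.domain/registry.k8s.io/echoserver:1.4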
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-179932
helpers_test.go:235: (dbg) docker inspect no-preload-179932:

-- stdout --
	[
	    {
	        "Id": "33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db",
	        "Created": "2024-09-16T11:50:18.324141753Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 354317,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:50:18.460923195Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/hostname",
	        "HostsPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/hosts",
	        "LogPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db-json.log",
	        "Name": "/no-preload-179932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-179932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-179932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-179932",
	                "Source": "/var/lib/docker/volumes/no-preload-179932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-179932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-179932",
	                "name.minikube.sigs.k8s.io": "no-preload-179932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7cd51b56ae0e7b9c36d315b4ce9fb777c38e910770cfb5f1f448c928dadda05",
	            "SandboxKey": "/var/run/docker/netns/a7cd51b56ae0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-179932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3318c5c795cbdaf6a4546ff9f05fc1f3534565776857632d9afa204a3c5ca91f",
	                    "EndpointID": "1762fc6325de440c55f237e57f8ef1680b848810c568c35778055aedb3d79112",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-179932",
	                        "33415cb7fa83"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
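The inspect output shows every container port published only on 127.0.0.1 with an ephemeral host port; the API server's 8443/tcp, for example, maps to 33101. A sketch of extracting such a mapping from output like this with docker's Go templates:

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-179932
    # prints 33101 for the container above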
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-179932 -n no-preload-179932
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-179932 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-179932 logs -n 25: (1.100128623s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cri-dockerd --version                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC |                     |
	|         | sudo systemctl status                                  |                              |         |         |                     |                     |
	|         | containerd --all --full                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat containerd                          |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467        | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-406673 image                           | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-946599 | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | disable-driver-mounts-946599                           |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-179932             | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:50:17
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:50:17.261646  353745 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:50:17.261961  353745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:50:17.261974  353745 out.go:358] Setting ErrFile to fd 2...
	I0916 11:50:17.261981  353745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:50:17.262273  353745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:50:17.263118  353745 out.go:352] Setting JSON to false
	I0916 11:50:17.264280  353745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5557,"bootTime":1726481860,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:50:17.264369  353745 start.go:139] virtualization: kvm guest
	I0916 11:50:17.267026  353745 out.go:177] * [no-preload-179932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:50:17.268879  353745 notify.go:220] Checking for updates...
	I0916 11:50:17.268946  353745 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:50:17.270731  353745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:50:17.272238  353745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:50:17.273551  353745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:50:17.275161  353745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:50:17.276866  353745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:50:17.279205  353745 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279359  353745 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279497  353745 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:17.279614  353745 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:50:17.307569  353745 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:50:17.307662  353745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:50:17.364583  353745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:50:17.353613217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:50:17.364687  353745 docker.go:318] overlay module found
	I0916 11:50:17.367827  353745 out.go:177] * Using the docker driver based on user configuration
	I0916 11:50:17.369319  353745 start.go:297] selected driver: docker
	I0916 11:50:17.369364  353745 start.go:901] validating driver "docker" against <nil>
	I0916 11:50:17.369380  353745 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:50:17.370517  353745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:50:17.426383  353745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:50:17.415784753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:50:17.426604  353745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:50:17.426824  353745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:50:17.428784  353745 out.go:177] * Using Docker driver with root privileges
	I0916 11:50:17.430291  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:17.430351  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:17.430360  353745 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:50:17.430422  353745 start.go:340] cluster config:
	{Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:50:17.432336  353745 out.go:177] * Starting "no-preload-179932" primary control-plane node in "no-preload-179932" cluster
	I0916 11:50:17.434034  353745 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:50:17.435683  353745 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:50:17.436991  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:50:17.437122  353745 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:50:17.437157  353745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json ...
	I0916 11:50:17.437183  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json: {Name:mkc16156d5a07d416da64f9d96a3502b09dcbb6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
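
The two lines above show the profile being persisted to config.json under a write lock. A minimal sketch of that save step, assuming a small hypothetical ClusterConfig stand-in for the struct dumped earlier (minikube's own lock.go adds the Delay:500ms/Timeout:1m0s retry semantics logged here):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"
    )

    // ClusterConfig is a hypothetical stand-in for the much larger struct logged above.
    type ClusterConfig struct {
        Name              string
        Driver            string
        KubernetesVersion string
    }

    func saveProfile(path string, cfg ClusterConfig) error {
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            return err
        }
        // Write to a temp file in the same directory, then rename: rename is
        // atomic on POSIX filesystems, so a reader never sees a half-written
        // config.json.
        tmp := filepath.Join(filepath.Dir(path), ".config.json.tmp")
        if err := os.WriteFile(tmp, data, 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        err := saveProfile("/tmp/config.json", ClusterConfig{
            Name: "no-preload-179932", Driver: "docker", KubernetesVersion: "v1.31.1",
        })
        fmt.Println(err)
    }
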
	I0916 11:50:17.437384  353745 cache.go:107] acquiring lock: {Name:mk871ae736ce09ba2b4421598649b9ecfc9a98bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437387  353745 cache.go:107] acquiring lock: {Name:mk8b23bbceb92ce965299065ca3d25050387467b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437413  353745 cache.go:107] acquiring lock: {Name:mk0d227841b16d1443985320c46c5945df5de856 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437384  353745 cache.go:107] acquiring lock: {Name:mkc9fa4e48807b59cdf7eefb19d5245546dc831d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437456  353745 cache.go:107] acquiring lock: {Name:mkf3f21a53f01d1ee0608b28c94cf582dc8c355f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437403  353745 cache.go:107] acquiring lock: {Name:mk540470437675d9c95f2acaf015b6015148e24f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437530  353745 cache.go:107] acquiring lock: {Name:mkbb0d7522afd30851ddf834442136fb3567a26a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437558  353745 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:50:17.437616  353745 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:17.437629  353745 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:17.437676  353745 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:17.437698  353745 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:17.437787  353745 cache.go:107] acquiring lock: {Name:mkfcf90f9df5885fe87d6ff86cdb7f8f58dec344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.437843  353745 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:50:17.437856  353745 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 477.041µs
	I0916 11:50:17.437874  353745 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:50:17.437894  353745 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:17.437975  353745 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:17.439129  353745 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:50:17.439139  353745 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:17.439178  353745 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:17.439228  353745 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:17.439303  353745 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:17.439442  353745 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:17.439509  353745 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	W0916 11:50:17.465435  353745 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:50:17.465457  353745 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:50:17.465523  353745 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:50:17.465535  353745 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:50:17.465539  353745 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:50:17.465546  353745 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:50:17.465551  353745 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:50:17.540421  353745 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:50:17.540482  353745 cache.go:194] Successfully downloaded all kic artifacts
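
The cache hits above (storage-provisioner, then the kic base tarball) all follow the same shape: compute the on-disk tarball path, stat it, and skip the pull on success. A minimal sketch of that decision, with the download step left as a hypothetical stub:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachePath maps "registry.k8s.io/pause:3.10" to ".../registry.k8s.io/pause_3.10",
    // matching the cache layout in the log above.
    func cachePath(root, image string) string {
        return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
    }

    func ensureCached(root, image string, download func(dst string) error) error {
        dst := cachePath(root, image)
        if _, err := os.Stat(dst); err == nil {
            fmt.Printf("cache hit: %s exists, skipping pull\n", dst)
            return nil
        }
        return download(dst)
    }

    func main() {
        _ = ensureCached("/tmp/cache/images/amd64", "registry.k8s.io/pause:3.10",
            func(dst string) error { fmt.Println("would download to", dst); return nil })
    }
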
	I0916 11:50:17.540523  353745 start.go:360] acquireMachinesLock for no-preload-179932: {Name:mkd475c3f7aed9017143023aeb4fceb62fe6c60d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:50:17.540666  353745 start.go:364] duration metric: took 116.626µs to acquireMachinesLock for "no-preload-179932"
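
acquireMachinesLock, like the per-image cache locks above, is parameterized with Delay:500ms and Timeout:10m0s. A minimal sketch of that acquire loop, with tryLock as a hypothetical stand-in for the real file-lock primitive:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // acquire retries a non-blocking lock attempt every `delay` until `timeout`
    // expires, mirroring the {Delay:500ms Timeout:10m0s} parameters logged above.
    func acquire(tryLock func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if tryLock() {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        err := acquire(func() bool { return true }, 500*time.Millisecond, 10*time.Minute)
        fmt.Println(err, "took", time.Since(start)) // uncontended case, like the µs metric above
    }
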
	I0916 11:50:17.540697  353745 start.go:93] Provisioning new machine with config: &{Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:50:17.540799  353745 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:50:17.543760  353745 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:50:17.544066  353745 start.go:159] libmachine.API.Create for "no-preload-179932" (driver="docker")
	I0916 11:50:17.544097  353745 client.go:168] LocalClient.Create starting
	I0916 11:50:17.544177  353745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:50:17.544211  353745 main.go:141] libmachine: Decoding PEM data...
	I0916 11:50:17.544230  353745 main.go:141] libmachine: Parsing certificate...
	I0916 11:50:17.544292  353745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:50:17.544320  353745 main.go:141] libmachine: Decoding PEM data...
	I0916 11:50:17.544336  353745 main.go:141] libmachine: Parsing certificate...
	I0916 11:50:17.544768  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:50:17.563971  353745 cli_runner.go:211] docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:50:17.564043  353745 network_create.go:284] running [docker network inspect no-preload-179932] to gather additional debugging logs...
	I0916 11:50:17.564060  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932
	W0916 11:50:17.581522  353745 cli_runner.go:211] docker network inspect no-preload-179932 returned with exit code 1
	I0916 11:50:17.581552  353745 network_create.go:287] error running [docker network inspect no-preload-179932]: docker network inspect no-preload-179932: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-179932 not found
	I0916 11:50:17.581569  353745 network_create.go:289] output of [docker network inspect no-preload-179932]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-179932 not found
	
	** /stderr **
	I0916 11:50:17.581662  353745 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:50:17.600809  353745 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:50:17.601729  353745 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:50:17.602523  353745 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:50:17.603150  353745 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:50:17.603787  353745 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:50:17.604419  353745 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:50:17.605797  353745 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039cfe0}
	I0916 11:50:17.605828  353745 network_create.go:124] attempt to create docker network no-preload-179932 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:50:17.605872  353745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-179932 no-preload-179932
	I0916 11:50:17.676431  353745 network_create.go:108] docker network no-preload-179932 192.168.103.0/24 created
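
The subnet scan above walks candidate 192.168.x.0/24 networks in steps of 9 in the third octet (49, 58, 67, ...) and takes the first free one, 192.168.103.0/24 here. A minimal sketch of that selection, with isTaken as a hypothetical probe (the real code inspects host interfaces and existing docker networks to decide this):

    package main

    import "fmt"

    // freeSubnet returns the first candidate 192.168.x.0/24 network that the
    // probe reports as free, following the 49, 58, 67, ... sequence in the log.
    func freeSubnet(isTaken func(cidr string) bool) (string, bool) {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !isTaken(cidr) {
                return cidr, true
            }
        }
        return "", false
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
            "192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
        }
        cidr, _ := freeSubnet(func(c string) bool { return taken[c] })
        fmt.Println("using free private subnet", cidr) // 192.168.103.0/24, as logged
    }
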
	I0916 11:50:17.676472  353745 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-179932" container
	I0916 11:50:17.676527  353745 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:50:17.695151  353745 cli_runner.go:164] Run: docker volume create no-preload-179932 --label name.minikube.sigs.k8s.io=no-preload-179932 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:50:17.716208  353745 oci.go:103] Successfully created a docker volume no-preload-179932
	I0916 11:50:17.716280  353745 cli_runner.go:164] Run: docker run --rm --name no-preload-179932-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-179932 --entrypoint /usr/bin/test -v no-preload-179932:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:50:17.982139  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:50:18.004879  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:50:18.032231  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:50:18.062798  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:50:18.064953  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:50:18.071480  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:50:18.072209  353745 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:50:18.157840  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:50:18.157871  353745 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 720.488492ms
	I0916 11:50:18.157891  353745 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:50:18.244108  353745 oci.go:107] Successfully prepared a docker volume no-preload-179932
	I0916 11:50:18.244138  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	W0916 11:50:18.244297  353745 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:50:18.244412  353745 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:50:18.303137  353745 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-179932 --name no-preload-179932 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-179932 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-179932 --network no-preload-179932 --ip 192.168.103.2 --volume no-preload-179932:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:50:18.643596  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Running}}
	I0916 11:50:18.667792  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.688027  353745 cli_runner.go:164] Run: docker exec no-preload-179932 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:50:18.735261  353745 oci.go:144] the created container "no-preload-179932" has a running status.
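
The running-status check above is a poll of docker container inspect. A minimal sketch of waiting for a container to report State.Running=true within a deadline:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitRunning polls `docker container inspect --format {{.State.Running}}`
    // until it prints "true" or the deadline passes.
    func waitRunning(name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect",
                "--format", "{{.State.Running}}", name).Output()
            if err == nil && strings.TrimSpace(string(out)) == "true" {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("container %q not running after %s", name, timeout)
    }

    func main() {
        fmt.Println(waitRunning("no-preload-179932", 30*time.Second))
    }
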
	I0916 11:50:18.735326  353745 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa...
	I0916 11:50:18.766733  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:50:18.766766  353745 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 1.329386554s
	I0916 11:50:18.766783  353745 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:50:18.853467  353745 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:50:18.875421  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.894347  353745 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:50:18.894368  353745 kic_runner.go:114] Args: [docker exec --privileged no-preload-179932 chown docker:docker /home/docker/.ssh/authorized_keys]
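
The kic SSH provisioning above generates a key pair, copies id_rsa.pub into /home/docker/.ssh/authorized_keys, and fixes its ownership. A minimal sketch of the key-generation half, using the golang.org/x/crypto/ssh package for the authorized_keys encoding; the paths are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func writeKeyPair(privPath, pubPath string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        // ssh.MarshalAuthorizedKey yields the "ssh-rsa AAAA..." line that gets
        // copied into /home/docker/.ssh/authorized_keys in the log above.
        return os.WriteFile(pubPath, ssh.MarshalAuthorizedKey(pub), 0o644)
    }

    func main() { fmt.Println(writeKeyPair("/tmp/id_rsa", "/tmp/id_rsa.pub")) }
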
	I0916 11:50:18.942980  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:18.964524  353745 machine.go:93] provisionDockerMachine start ...
	I0916 11:50:18.964628  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:18.985177  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:18.985626  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:18.985648  353745 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:50:18.986437  353745 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52304->127.0.0.1:33098: read: connection reset by peer
	I0916 11:50:20.352937  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:50:20.352965  353745 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.91554704s
	I0916 11:50:20.352978  353745 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:50:20.375094  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:50:20.375146  353745 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 2.93769009s
	I0916 11:50:20.375162  353745 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:50:20.404338  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:50:20.404368  353745 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 2.967049618s
	I0916 11:50:20.404383  353745 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:50:20.440630  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:50:20.440662  353745 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.002881935s
	I0916 11:50:20.440675  353745 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:50:20.758418  353745 cache.go:157] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:50:20.758445  353745 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 3.321045606s
	I0916 11:50:20.758457  353745 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:50:20.758473  353745 cache.go:87] Successfully saved all images to host disk.
	I0916 11:50:22.121000  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179932
	
	I0916 11:50:22.121029  353745 ubuntu.go:169] provisioning hostname "no-preload-179932"
	I0916 11:50:22.121084  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.139064  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.139265  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.139281  353745 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-179932 && echo "no-preload-179932" | sudo tee /etc/hostname
	I0916 11:50:22.285481  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179932
	
	I0916 11:50:22.285587  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.303430  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.303635  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.303653  353745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-179932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-179932/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-179932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:50:22.441654  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:50:22.441687  353745 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:50:22.441713  353745 ubuntu.go:177] setting up certificates
	I0916 11:50:22.441726  353745 provision.go:84] configureAuth start
	I0916 11:50:22.441784  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:22.459186  353745 provision.go:143] copyHostCerts
	I0916 11:50:22.459247  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:50:22.459254  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:50:22.459318  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:50:22.459401  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:50:22.459412  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:50:22.459436  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:50:22.459501  353745 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:50:22.459509  353745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:50:22.459529  353745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:50:22.459579  353745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.no-preload-179932 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-179932]
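
The server cert above is generated with the SAN list san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-179932]. A minimal sketch of producing such a SAN-bearing certificate, self-signed here for brevity (minikube actually signs with ca-key.pem); error handling elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-179932"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-179932"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed: template doubles as parent. A CA-signed variant would
        // pass the CA cert and key here instead.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        _ = os.WriteFile("/tmp/server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
    }
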
	I0916 11:50:22.604596  353745 provision.go:177] copyRemoteCerts
	I0916 11:50:22.604661  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:50:22.604696  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.623335  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:22.722150  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:50:22.744937  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:50:22.767660  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:50:22.790813  353745 provision.go:87] duration metric: took 349.073566ms to configureAuth
	I0916 11:50:22.790843  353745 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:50:22.791022  353745 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:22.791130  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:22.809366  353745 main.go:141] libmachine: Using SSH client type: native
	I0916 11:50:22.809570  353745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0916 11:50:22.809594  353745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:50:23.037925  353745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:50:23.037948  353745 machine.go:96] duration metric: took 4.073399787s to provisionDockerMachine
	I0916 11:50:23.037960  353745 client.go:171] duration metric: took 5.493852423s to LocalClient.Create
	I0916 11:50:23.037983  353745 start.go:167] duration metric: took 5.493918053s to libmachine.API.Create "no-preload-179932"
	I0916 11:50:23.037991  353745 start.go:293] postStartSetup for "no-preload-179932" (driver="docker")
	I0916 11:50:23.038043  353745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:50:23.038130  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:50:23.038173  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.057110  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.155780  353745 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:50:23.158999  353745 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:50:23.159029  353745 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:50:23.159036  353745 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:50:23.159042  353745 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:50:23.159052  353745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:50:23.159108  353745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:50:23.159178  353745 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:50:23.159265  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:50:23.168631  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:50:23.191792  353745 start.go:296] duration metric: took 153.784247ms for postStartSetup
	I0916 11:50:23.192189  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:23.210469  353745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json ...
	I0916 11:50:23.210780  353745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:50:23.210826  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.228693  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.322250  353745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:50:23.326606  353745 start.go:128] duration metric: took 5.78575133s to createHost
	I0916 11:50:23.326630  353745 start.go:83] releasing machines lock for "no-preload-179932", held for 5.785949248s
	I0916 11:50:23.326688  353745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:50:23.345016  353745 ssh_runner.go:195] Run: cat /version.json
	I0916 11:50:23.345063  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.345140  353745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:50:23.345213  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:23.364213  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.365476  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:23.539384  353745 ssh_runner.go:195] Run: systemctl --version
	I0916 11:50:23.544045  353745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:50:23.682500  353745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:50:23.686822  353745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:50:23.705505  353745 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:50:23.705596  353745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:50:23.735375  353745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:50:23.735406  353745 start.go:495] detecting cgroup driver to use...
	I0916 11:50:23.735443  353745 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:50:23.735487  353745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:50:23.751165  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:50:23.762367  353745 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:50:23.762424  353745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:50:23.776422  353745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:50:23.790314  353745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:50:23.871070  353745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:50:23.955641  353745 docker.go:233] disabling docker service ...
	I0916 11:50:23.955704  353745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:50:23.974798  353745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:50:23.986320  353745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:50:24.066055  353745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:50:24.154083  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:50:24.165011  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:50:24.180586  353745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:50:24.180688  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.189971  353745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:50:24.190024  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.199843  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.209792  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.219702  353745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:50:24.228365  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.237703  353745 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.252615  353745 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:50:24.261804  353745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:50:24.269676  353745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:50:24.278212  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:24.351610  353745 ssh_runner.go:195] Run: sudo systemctl restart crio
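
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the daemon-reload and crio restart. A minimal sketch of the first two edits (pause image and cgroup manager) done with regexp replaces instead of sed:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
        // A systemctl daemon-reload plus restart of crio then applies the
        // change, as the following log lines show.
    }
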
	I0916 11:50:24.760310  353745 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:50:24.760392  353745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:50:24.763747  353745 start.go:563] Will wait 60s for crictl version
	I0916 11:50:24.763819  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:24.767047  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:50:24.799325  353745 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:50:24.799407  353745 ssh_runner.go:195] Run: crio --version
	I0916 11:50:24.833821  353745 ssh_runner.go:195] Run: crio --version
	I0916 11:50:24.872021  353745 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:50:24.873644  353745 cli_runner.go:164] Run: docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:50:24.890696  353745 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:50:24.894309  353745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:50:24.905242  353745 kubeadm.go:883] updating cluster {Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:50:24.905402  353745 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:50:24.905459  353745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:50:24.938604  353745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 11:50:24.938629  353745 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:50:24.938703  353745 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:24.938734  353745 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:24.938778  353745 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:24.938807  353745 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:50:24.938828  353745 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:24.938854  353745 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:24.938794  353745 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:24.938984  353745 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:24.939961  353745 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:24.939978  353745 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:24.940164  353745 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:24.940207  353745 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:24.940241  353745 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:50:24.940248  353745 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:24.940172  353745 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:24.940170  353745 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.118753  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.154474  353745 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0916 11:50:25.154512  353745 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.154548  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.157855  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
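
The "needs transfer" decision above compares what the container runtime reports for an image against the expected hash; a mismatch or a missing image triggers a crictl rmi followed by a load from the cached tarball. A minimal sketch of that check:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer asks podman for the image ID; the image needs transfer if
    // it is absent from the runtime or not at the expected hash.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present in the container runtime at all
        }
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        img := "registry.k8s.io/kube-scheduler:v1.31.1"
        if needsTransfer(img, "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b") {
            fmt.Println(img, "needs transfer; removing before load")
            _ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
        }
    }
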
	I0916 11:50:25.162753  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.167885  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.174842  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.177553  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0916 11:50:25.199771  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.199957  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.270508  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.296799  353745 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0916 11:50:25.296844  353745 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0916 11:50:25.296908  353745 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.296933  353745 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0916 11:50:25.296853  353745 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.296965  353745 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.296980  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.296993  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.297001  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.297054  353745 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0916 11:50:25.297079  353745 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0916 11:50:25.297108  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.320461  353745 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0916 11:50:25.320506  353745 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.320553  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.320578  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:50:25.333783  353745 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0916 11:50:25.333833  353745 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.333854  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.333872  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.333870  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:25.333904  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.333948  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.333962  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.414304  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:50:25.414412  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:25.504551  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.504652  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.504665  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.504697  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.504743  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.504760  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.504802  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.1': No such file or directory
	I0916 11:50:25.504831  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 --> /var/lib/minikube/images/kube-scheduler_v1.31.1 (20187136 bytes)
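
The existence check above runs stat on the node and falls back to an scp of the cached tarball when it fails. A minimal sketch of that check-then-copy step, with runRemote and scp as hypothetical stand-ins for minikube's ssh_runner:

    package main

    import "fmt"

    // ensureRemote copies the local tarball to the node only when the remote
    // stat fails, mirroring the existence check in the log above.
    func ensureRemote(local, remote string,
        runRemote func(cmd string) error, scp func(src, dst string) error) error {
        if err := runRemote(fmt.Sprintf("stat -c \"%%s %%y\" %s", remote)); err == nil {
            return nil // already on the node, nothing to copy
        }
        return scp(local, remote)
    }

    func main() {
        _ = ensureRemote(
            "/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1",
            "/var/lib/minikube/images/kube-scheduler_v1.31.1",
            func(cmd string) error { return fmt.Errorf("no such file") }, // simulate the logged stat failure
            func(src, dst string) error { fmt.Println("scp", src, "->", dst); return nil },
        )
    }
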
	I0916 11:50:25.715489  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:50:25.715508  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:50:25.715538  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:50:25.715600  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:50:25.715604  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:50:25.715659  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.913649  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:50:25.913683  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:50:25.913700  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:50:25.913708  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:50:25.913757  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0916 11:50:25.913757  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:50:25.913785  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:25.913799  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:25.913659  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:50:25.913838  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:25.913889  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928792  353745 retry.go:31] will retry after 284.043253ms: ssh: rejected: connect failed (open failed)
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928820  353745 retry.go:31] will retry after 206.277714ms: ssh: rejected: connect failed (open failed)
	W0916 11:50:25.928748  353745 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0916 11:50:25.928832  353745 retry.go:31] will retry after 258.129273ms: ssh: rejected: connect failed (open failed)
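
The three warnings above share one timestamp because several image transfers run concurrently and all lost the same SSH connection; each goroutine then sleeps for a slightly different randomized interval before reconnecting, which keeps them from retrying in lockstep. A sketch of that jittered retry, with illustrative backoff constants rather than minikube's:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs op up to attempts times, sleeping a randomized interval
    // between failures so concurrent callers don't reconnect in lockstep.
    // The 200ms base and 100ms jitter are illustrative constants.
    func retry(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            delay := 200*time.Millisecond + time.Duration(rand.Int63n(int64(100*time.Millisecond)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        _ = retry(3, func() error { return errors.New("ssh: rejected: connect failed") })
    }
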
	I0916 11:50:25.955883  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.1': No such file or directory
	I0916 11:50:25.955923  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 --> /var/lib/minikube/images/kube-proxy_v1.31.1 (30214144 bytes)
	I0916 11:50:25.955990  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:25.955998  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I0916 11:50:25.956027  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
	I0916 11:50:25.956080  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:25.979690  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:25.980957  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.009367  353745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:26.009427  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.015683  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:50:26.015784  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:26.015850  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.020816  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:26.020879  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:50:26.020938  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:26.035542  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.037133  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.041968  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:26.219884  353745 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:50:26.219941  353745 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:26.219994  353745 ssh_runner.go:195] Run: which crictl
	I0916 11:50:26.219941  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.1': No such file or directory
	I0916 11:50:26.220069  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 --> /var/lib/minikube/images/kube-controller-manager_v1.31.1 (26231808 bytes)
	I0916 11:50:28.111335  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.090425901s)
	I0916 11:50:28.111372  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0916 11:50:28.111392  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:28.111394  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.197583966s)
	I0916 11:50:28.111426  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0916 11:50:28.111436  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:50:28.111440  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: (2.197664353s)
	I0916 11:50:28.111456  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0916 11:50:28.111476  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0916 11:50:28.111454  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0916 11:50:28.111523  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.197610351s)
	I0916 11:50:28.111565  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.1': No such file or directory
	I0916 11:50:28.111596  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 --> /var/lib/minikube/images/kube-apiserver_v1.31.1 (28057088 bytes)
	I0916 11:50:28.111571  353745 ssh_runner.go:235] Completed: which crictl: (1.891560983s)
	I0916 11:50:28.111720  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:29.915246  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.803785881s)
	I0916 11:50:29.915276  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0916 11:50:29.915301  353745 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:29.915321  353745 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.803577324s)
	I0916 11:50:29.915347  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:50:29.915396  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:32.399830  353745 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.48440876s)
	I0916 11:50:32.399928  353745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:32.399839  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (2.484470985s)
	I0916 11:50:32.399960  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0916 11:50:32.399988  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:32.400032  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:50:32.436189  353745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:50:32.436293  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:33.746085  353745 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.309767608s)
	I0916 11:50:33.746123  353745 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:50:33.746085  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.346024308s)
	I0916 11:50:33.746143  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:50:33.746147  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0916 11:50:33.746168  353745 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0916 11:50:33.746219  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0916 11:50:33.886742  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0916 11:50:33.886791  353745 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:33.886847  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:50:35.329396  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.442524266s)
	I0916 11:50:35.329425  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0916 11:50:35.329448  353745 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:50:35.329494  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:50:36.770428  353745 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.440905892s)
	I0916 11:50:36.770458  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0916 11:50:36.770484  353745 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:36.770529  353745 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:50:37.409584  353745 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:50:37.409619  353745 cache_images.go:123] Successfully loaded all cached images
	I0916 11:50:37.409625  353745 cache_images.go:92] duration metric: took 12.470984002s to LoadCachedImages
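
The "duration metric" lines bracketing this phase come from the standard timing idiom in these logs: capture a start time when the step begins and log time.Since when it returns. A minimal sketch, with the body of the step stubbed out:

    package main

    import (
        "log"
        "time"
    )

    // loadCachedImages shows the timing idiom behind the "duration metric"
    // lines: record a start time, do the work, log time.Since on the way out.
    func loadCachedImages() {
        start := time.Now()
        defer func() {
            log.Printf("duration metric: took %s to LoadCachedImages", time.Since(start))
        }()
        time.Sleep(50 * time.Millisecond) // stand-in for the scp + podman load work
    }

    func main() { loadCachedImages() }
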
	I0916 11:50:37.409637  353745 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 11:50:37.409719  353745 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-179932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
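
One detail worth noting in the kubelet unit above: the empty ExecStart= line is not a mistake. In a systemd drop-in, an empty ExecStart= clears any command inherited from the base unit, so the ExecStart= line that follows fully replaces it. The unit itself is rendered from the node's config; a cut-down sketch of that rendering using Go's text/template (field names and the trimmed flag set are illustrative, not minikube's template):

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down rendering of the kubelet drop-in shown above. The first,
    // empty ExecStart= deliberately clears the inherited command so the
    // second ExecStart= replaces it.
    const unit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "Runtime": "crio",
            "Version": "v1.31.1",
            "Node":    "no-preload-179932",
            "IP":      "192.168.103.2",
        })
    }
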
	I0916 11:50:37.409783  353745 ssh_runner.go:195] Run: crio config
	I0916 11:50:37.452066  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:37.452086  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:37.452097  353745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:50:37.452115  353745 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-179932 NodeName:no-preload-179932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:50:37.452287  353745 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-179932"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:50:37.452356  353745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:50:37.461638  353745 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 11:50:37.461710  353745 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 11:50:37.469780  353745 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 11:50:37.469859  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:50:37.469894  353745 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 11:50:37.469905  353745 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 11:50:37.473264  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:50:37.473298  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 11:50:38.361857  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:50:38.365559  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:50:38.365594  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 11:50:38.493699  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:50:38.508908  353745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:50:38.512321  353745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:50:38.512350  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
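
Each binary download above carries a ?checksum=file:...sha256 parameter, i.e. the artifact is verified against the .sha256 file published alongside it before being cached and copied onto the node. A hand-rolled sketch of that fetch-and-verify step (minikube delegates this to its download layer; the code below is a self-contained approximation):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    // download pulls a release binary and checks it against the .sha256
    // file published next to it, the same pairing the ?checksum=file:...
    // parameter in the log expresses.
    func download(url string) ([]byte, error) {
        body, err := fetch(url)
        if err != nil {
            return nil, err
        }
        sumFile, err := fetch(url + ".sha256")
        if err != nil {
            return nil, err
        }
        want := strings.Fields(string(sumFile))[0]
        got := sha256.Sum256(body)
        if hex.EncodeToString(got[:]) != want {
            return nil, fmt.Errorf("checksum mismatch for %s", url)
        }
        return body, nil
    }

    func main() {
        if _, err := download("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("kubectl verified")
        }
    }
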
	I0916 11:50:38.676578  353745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:50:38.685326  353745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0916 11:50:38.701489  353745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:50:38.718627  353745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0916 11:50:38.735122  353745 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:50:38.738342  353745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
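
The one-liner above is how the control-plane.minikube.internal mapping is pinned in /etc/hosts: strip any stale line for that name, append the current one, write the result to a temp file, and sudo cp it into place so the file is never left half-written. A native Go equivalent of the same idea, as a sketch (the path and the rename-instead-of-cp are illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry rewrites path so it contains exactly one "ip<TAB>host"
    // line, the same effect as the grep -v / echo / cp pipeline above.
    func setHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // the log uses sudo cp instead of a rename
    }

    func main() {
        fmt.Println(setHostsEntry("/tmp/hosts", "192.168.103.2", "control-plane.minikube.internal"))
    }
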
	I0916 11:50:38.748252  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:38.827198  353745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:50:38.840338  353745 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932 for IP: 192.168.103.2
	I0916 11:50:38.840364  353745 certs.go:194] generating shared ca certs ...
	I0916 11:50:38.840393  353745 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.840560  353745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:50:38.840615  353745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:50:38.840627  353745 certs.go:256] generating profile certs ...
	I0916 11:50:38.840704  353745 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key
	I0916 11:50:38.840723  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt with IP's: []
	I0916 11:50:38.935911  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt ...
	I0916 11:50:38.935940  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: {Name:mkcfebd0395ea27149b681830fddcbfa0b287805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.936111  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key ...
	I0916 11:50:38.936122  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key: {Name:mkedb064e2171125bc65687de4300740d0c5fa5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:38.936197  353745 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391
	I0916 11:50:38.936211  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:50:39.161110  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 ...
	I0916 11:50:39.161163  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391: {Name:mk6e55865c08038f9c83c62a1e3de8ab46e37505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.161381  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391 ...
	I0916 11:50:39.161403  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391: {Name:mk7fa07a5319463f001b0ea91f26d16d256d3f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.161513  353745 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt.a7025391 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt
	I0916 11:50:39.161622  353745 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key
	I0916 11:50:39.161703  353745 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key
	I0916 11:50:39.161726  353745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt with IP's: []
	I0916 11:50:39.230589  353745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt ...
	I0916 11:50:39.230621  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt: {Name:mk9382a33ca50c5dc46808284f9e12b01271ffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:39.230825  353745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key ...
	I0916 11:50:39.230843  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key: {Name:mk92c148096f3309b2fe7cab24919949c9166c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
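
The profile certs generated above all follow one recipe: a fresh key plus a certificate whose IP SANs cover the service VIP, localhost, and the node IP, exactly the list logged for apiserver.crt.a7025391. A pared-down sketch of that step with crypto/x509; it self-signs for brevity, whereas minikube signs with the shared minikubeCA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // Generate a key and a certificate whose IP SANs match the list logged
    // above. Self-signed here; minikube signs with its CA instead.
    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
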
	I0916 11:50:39.231071  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:50:39.231123  353745 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:50:39.231142  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:50:39.231171  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:50:39.231206  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:50:39.231238  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:50:39.231294  353745 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:50:39.231970  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:50:39.254719  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:50:39.277272  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:50:39.299028  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:50:39.321434  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:50:39.343976  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:50:39.367682  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:50:39.389857  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:50:39.411764  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:50:39.434314  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:50:39.455995  353745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:50:39.478225  353745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:50:39.493981  353745 ssh_runner.go:195] Run: openssl version
	I0916 11:50:39.498988  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:50:39.507998  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.511432  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.511491  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:50:39.518178  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:50:39.528049  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:50:39.538529  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.542466  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.542525  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:50:39.550361  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:50:39.559880  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:50:39.569042  353745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.572563  353745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.572616  353745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:50:39.578893  353745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
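
The openssl x509 -hash calls and the <hash>.0 symlinks created right after them (51391683.0, 3ec20f2e.0, b5213941.0) implement OpenSSL's CApath convention: a CA certificate is looked up by a symlink named after its subject hash. A small sketch of the same pairing:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash reproduces the pattern above: ask openssl for the
    // certificate's subject hash, then expose the cert as <hash>.0 so
    // OpenSSL's CApath directory lookup can find it.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // replace a stale link, like ln -fs above
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
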
	I0916 11:50:39.587606  353745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:50:39.590786  353745 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:50:39.590838  353745 kubeadm.go:392] StartCluster: {Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:50:39.590919  353745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:50:39.590962  353745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:50:39.623993  353745 cri.go:89] found id: ""
	I0916 11:50:39.624065  353745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:50:39.632782  353745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:50:39.641165  353745 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:50:39.641220  353745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:50:39.649467  353745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:50:39.649485  353745 kubeadm.go:157] found existing configuration files:
	
	I0916 11:50:39.649526  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:50:39.657545  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:50:39.657603  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:50:39.665725  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:50:39.674189  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:50:39.674239  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:50:39.681997  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:50:39.690004  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:50:39.690062  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:50:39.697984  353745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:50:39.706536  353745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:50:39.706602  353745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:50:39.714682  353745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:50:39.749285  353745 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:50:39.749390  353745 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:50:39.766004  353745 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:50:39.766125  353745 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:50:39.766178  353745 kubeadm.go:310] OS: Linux
	I0916 11:50:39.766223  353745 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:50:39.766282  353745 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:50:39.766324  353745 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:50:39.766369  353745 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:50:39.766430  353745 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:50:39.766507  353745 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:50:39.766575  353745 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:50:39.766639  353745 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:50:39.766706  353745 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:50:39.816683  353745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:50:39.816778  353745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:50:39.816904  353745 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:50:39.829767  353745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:50:39.833943  353745 out.go:235]   - Generating certificates and keys ...
	I0916 11:50:39.834055  353745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:50:39.834121  353745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:50:39.912342  353745 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:50:39.981611  353745 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:50:40.100442  353745 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:50:40.353713  353745 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:50:40.529814  353745 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:50:40.529974  353745 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-179932] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:50:40.662396  353745 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:50:40.662532  353745 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-179932] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:50:40.978365  353745 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:50:41.089411  353745 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:50:41.246484  353745 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:50:41.246591  353745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:50:41.338255  353745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:50:41.520493  353745 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:50:41.631124  353745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:50:41.869980  353745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:50:42.120470  353745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:50:42.121129  353745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:50:42.123645  353745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:50:42.125750  353745 out.go:235]   - Booting up control plane ...
	I0916 11:50:42.125883  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:50:42.125983  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:50:42.126071  353745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:50:42.136142  353745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:50:42.141313  353745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:50:42.141405  353745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:50:42.219091  353745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:50:42.219242  353745 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:50:42.720318  353745 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.359131ms
	I0916 11:50:42.720396  353745 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:50:47.221859  353745 kubeadm.go:310] [api-check] The API server is healthy after 4.501530278s
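
Both the kubelet-check and the api-check above are plain poll loops against a healthz endpoint with an overall deadline ("This can take up to 4m0s"). A minimal sketch of that loop (the interval is illustrative):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls a healthz endpoint until it answers 200 or the
    // deadline passes, the same shape as the "[kubelet-check]" and
    // "[api-check]" waits above.
    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %v", url, timeout)
    }

    func main() {
        fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute))
    }
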
	I0916 11:50:47.232717  353745 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:50:47.243418  353745 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:50:47.260829  353745 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:50:47.261089  353745 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-179932 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:50:47.268219  353745 kubeadm.go:310] [bootstrap-token] Using token: wbzbzb.swi91qeomz7323fx
	I0916 11:50:47.270698  353745 out.go:235]   - Configuring RBAC rules ...
	I0916 11:50:47.270836  353745 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:50:47.273506  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:50:47.279257  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:50:47.281945  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:50:47.284450  353745 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:50:47.288148  353745 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:50:47.628407  353745 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:50:48.046362  353745 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:50:48.628627  353745 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:50:48.629544  353745 kubeadm.go:310] 
	I0916 11:50:48.629646  353745 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:50:48.629658  353745 kubeadm.go:310] 
	I0916 11:50:48.629750  353745 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:50:48.629775  353745 kubeadm.go:310] 
	I0916 11:50:48.629834  353745 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:50:48.629927  353745 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:50:48.630007  353745 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:50:48.630018  353745 kubeadm.go:310] 
	I0916 11:50:48.630095  353745 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:50:48.630105  353745 kubeadm.go:310] 
	I0916 11:50:48.630171  353745 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:50:48.630180  353745 kubeadm.go:310] 
	I0916 11:50:48.630257  353745 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:50:48.630344  353745 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:50:48.630458  353745 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:50:48.630473  353745 kubeadm.go:310] 
	I0916 11:50:48.630589  353745 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:50:48.630728  353745 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:50:48.630737  353745 kubeadm.go:310] 
	I0916 11:50:48.630851  353745 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wbzbzb.swi91qeomz7323fx \
	I0916 11:50:48.631029  353745 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:50:48.631080  353745 kubeadm.go:310] 	--control-plane 
	I0916 11:50:48.631097  353745 kubeadm.go:310] 
	I0916 11:50:48.631194  353745 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:50:48.631209  353745 kubeadm.go:310] 
	I0916 11:50:48.631311  353745 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wbzbzb.swi91qeomz7323fx \
	I0916 11:50:48.631477  353745 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
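
The --discovery-token-ca-cert-hash in the join commands above is kubeadm's pinned-CA mechanism: the value is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, and it can be recomputed from the CA PEM so a joining node can verify the API server it discovers. A sketch of that computation:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    // Recomputes kubeadm's --discovery-token-ca-cert-hash: the SHA-256 of
    // the CA certificate's DER-encoded Subject Public Key Info.
    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
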
	I0916 11:50:48.632992  353745 kubeadm.go:310] W0916 11:50:39.746676    2273 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:50:48.633284  353745 kubeadm.go:310] W0916 11:50:39.747329    2273 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:50:48.633518  353745 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:50:48.633654  353745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:50:48.633668  353745 cni.go:84] Creating CNI manager for ""
	I0916 11:50:48.633678  353745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:50:48.636566  353745 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:50:48.638084  353745 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:50:48.642054  353745 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:50:48.642074  353745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:50:48.659841  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:50:48.854859  353745 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:50:48.854907  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:48.854934  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-179932 minikube.k8s.io/updated_at=2024_09_16T11_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=no-preload-179932 minikube.k8s.io/primary=true
	I0916 11:50:48.862914  353745 ops.go:34] apiserver oom_adj: -16
	I0916 11:50:48.947264  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:49.447477  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:49.948030  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:50.448348  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:50.947452  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:51.447333  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:51.947456  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:52.447460  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:52.948258  353745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:50:53.018251  353745 kubeadm.go:1113] duration metric: took 4.163399098s to wait for elevateKubeSystemPrivileges
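
The burst of "kubectl get sa default" runs at roughly half-second intervals is the wait behind elevateKubeSystemPrivileges: the cluster-admin binding created above only takes effect once the default ServiceAccount exists, so the loop polls until the get succeeds. A sketch of the same wait (the kubectl path is abbreviated here):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` on a fixed interval
    // until it exits zero, matching the half-second cadence of the
    // "get sa default" lines above.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %v", timeout)
    }

    func main() { fmt.Println(waitForDefaultSA(2 * time.Minute)) }
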
	I0916 11:50:53.018293  353745 kubeadm.go:394] duration metric: took 13.427458529s to StartCluster
	I0916 11:50:53.018313  353745 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:53.018394  353745 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:50:53.019749  353745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:50:53.019996  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:50:53.020006  353745 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:50:53.020089  353745 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:50:53.020185  353745 addons.go:69] Setting storage-provisioner=true in profile "no-preload-179932"
	I0916 11:50:53.020206  353745 addons.go:69] Setting default-storageclass=true in profile "no-preload-179932"
	I0916 11:50:53.020229  353745 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-179932"
	I0916 11:50:53.020239  353745 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:50:53.020210  353745 addons.go:234] Setting addon storage-provisioner=true in "no-preload-179932"
	I0916 11:50:53.020316  353745 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:50:53.020631  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.020797  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.022059  353745 out.go:177] * Verifying Kubernetes components...
	I0916 11:50:53.023597  353745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:50:53.044946  353745 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:50:53.045147  353745 addons.go:234] Setting addon default-storageclass=true in "no-preload-179932"
	I0916 11:50:53.045190  353745 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:50:53.045672  353745 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:50:53.046362  353745 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:50:53.046382  353745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:50:53.046420  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:53.067056  353745 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:50:53.067095  353745 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:50:53.067169  353745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:50:53.076321  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:53.088844  353745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:50:53.210469  353745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:50:53.312446  353745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:50:53.323161  353745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:50:53.416369  353745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:50:53.603739  353745 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:50:53.605015  353745 node_ready.go:35] waiting up to 6m0s for node "no-preload-179932" to be "Ready" ...
	I0916 11:50:53.841034  353745 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:50:53.842341  353745 addons.go:510] duration metric: took 822.2633ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:50:54.107859  353745 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-179932" context rescaled to 1 replicas
	I0916 11:50:55.608268  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:50:57.608902  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:00.108773  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:02.608151  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:04.608412  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:07.108282  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:09.108982  353745 node_ready.go:53] node "no-preload-179932" has status "Ready":"False"
	I0916 11:51:10.608730  353745 node_ready.go:49] node "no-preload-179932" has status "Ready":"True"
	I0916 11:51:10.608754  353745 node_ready.go:38] duration metric: took 17.003714881s for node "no-preload-179932" to be "Ready" ...
	I0916 11:51:10.608765  353745 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:51:10.615200  353745 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.120543  353745 pod_ready.go:93] pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.120590  353745 pod_ready.go:82] duration metric: took 505.366914ms for pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.120600  353745 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.124478  353745 pod_ready.go:93] pod "etcd-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.124499  353745 pod_ready.go:82] duration metric: took 3.891956ms for pod "etcd-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.124510  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.128756  353745 pod_ready.go:93] pod "kube-apiserver-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.128778  353745 pod_ready.go:82] duration metric: took 4.260684ms for pod "kube-apiserver-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.128790  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.132774  353745 pod_ready.go:93] pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.132795  353745 pod_ready.go:82] duration metric: took 3.997805ms for pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.132806  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ckd46" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.409098  353745 pod_ready.go:93] pod "kube-proxy-ckd46" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.409126  353745 pod_ready.go:82] duration metric: took 276.310033ms for pod "kube-proxy-ckd46" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.409139  353745 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.809415  353745 pod_ready.go:93] pod "kube-scheduler-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:11.809441  353745 pod_ready.go:82] duration metric: took 400.294201ms for pod "kube-scheduler-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:11.809456  353745 pod_ready.go:39] duration metric: took 1.200676939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:51:11.809472  353745 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:51:11.809528  353745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:51:11.821759  353745 api_server.go:72] duration metric: took 18.801724291s to wait for apiserver process to appear ...
	I0916 11:51:11.821784  353745 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:51:11.821807  353745 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:51:11.825478  353745 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:51:11.826388  353745 api_server.go:141] control plane version: v1.31.1
	I0916 11:51:11.826412  353745 api_server.go:131] duration metric: took 4.6217ms to wait for apiserver health ...
	I0916 11:51:11.826420  353745 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:51:12.013073  353745 system_pods.go:59] 8 kube-system pods found
	I0916 11:51:12.013103  353745 system_pods.go:61] "coredns-7c65d6cfc9-sfxnk" [ec2c3f40-5323-4dce-ae07-29c4537f3067] Running
	I0916 11:51:12.013109  353745 system_pods.go:61] "etcd-no-preload-179932" [3af42b3e-f310-4932-b24a-85d3b55e19a0] Running
	I0916 11:51:12.013112  353745 system_pods.go:61] "kindnet-2678b" [28d0afc4-03fd-4b6e-8ced-8b440d6153ff] Running
	I0916 11:51:12.013116  353745 system_pods.go:61] "kube-apiserver-no-preload-179932" [7e6f5af8-a459-4b8b-b1b8-5df32f37cfe3] Running
	I0916 11:51:12.013120  353745 system_pods.go:61] "kube-controller-manager-no-preload-179932" [313b35c1-1982-4f0a-a0f9-ffde80f7989e] Running
	I0916 11:51:12.013123  353745 system_pods.go:61] "kube-proxy-ckd46" [2c024fac-4113-4c1b-8b50-3e066e7b9b67] Running
	I0916 11:51:12.013127  353745 system_pods.go:61] "kube-scheduler-no-preload-179932" [969d30fc-6575-4f1f-bcd0-32e8132681e9] Running
	I0916 11:51:12.013133  353745 system_pods.go:61] "storage-provisioner" [040e8794-ddea-4f91-b709-cb999b3c71d5] Running
	I0916 11:51:12.013141  353745 system_pods.go:74] duration metric: took 186.714262ms to wait for pod list to return data ...
	I0916 11:51:12.013150  353745 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:51:12.209497  353745 default_sa.go:45] found service account: "default"
	I0916 11:51:12.209523  353745 default_sa.go:55] duration metric: took 196.365905ms for default service account to be created ...
	I0916 11:51:12.209532  353745 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:51:12.411009  353745 system_pods.go:86] 8 kube-system pods found
	I0916 11:51:12.411045  353745 system_pods.go:89] "coredns-7c65d6cfc9-sfxnk" [ec2c3f40-5323-4dce-ae07-29c4537f3067] Running
	I0916 11:51:12.411056  353745 system_pods.go:89] "etcd-no-preload-179932" [3af42b3e-f310-4932-b24a-85d3b55e19a0] Running
	I0916 11:51:12.411063  353745 system_pods.go:89] "kindnet-2678b" [28d0afc4-03fd-4b6e-8ced-8b440d6153ff] Running
	I0916 11:51:12.411069  353745 system_pods.go:89] "kube-apiserver-no-preload-179932" [7e6f5af8-a459-4b8b-b1b8-5df32f37cfe3] Running
	I0916 11:51:12.411075  353745 system_pods.go:89] "kube-controller-manager-no-preload-179932" [313b35c1-1982-4f0a-a0f9-ffde80f7989e] Running
	I0916 11:51:12.411080  353745 system_pods.go:89] "kube-proxy-ckd46" [2c024fac-4113-4c1b-8b50-3e066e7b9b67] Running
	I0916 11:51:12.411085  353745 system_pods.go:89] "kube-scheduler-no-preload-179932" [969d30fc-6575-4f1f-bcd0-32e8132681e9] Running
	I0916 11:51:12.411090  353745 system_pods.go:89] "storage-provisioner" [040e8794-ddea-4f91-b709-cb999b3c71d5] Running
	I0916 11:51:12.411104  353745 system_pods.go:126] duration metric: took 201.565069ms to wait for k8s-apps to be running ...
	I0916 11:51:12.411116  353745 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:51:12.411160  353745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:51:12.422546  353745 system_svc.go:56] duration metric: took 11.421673ms WaitForService to wait for kubelet
	I0916 11:51:12.422583  353745 kubeadm.go:582] duration metric: took 19.402550835s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:51:12.422611  353745 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:51:12.609131  353745 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:51:12.609166  353745 node_conditions.go:123] node cpu capacity is 8
	I0916 11:51:12.609185  353745 node_conditions.go:105] duration metric: took 186.568247ms to run NodePressure ...
	I0916 11:51:12.609200  353745 start.go:241] waiting for startup goroutines ...
	I0916 11:51:12.609211  353745 start.go:246] waiting for cluster config update ...
	I0916 11:51:12.609225  353745 start.go:255] writing updated cluster config ...
	I0916 11:51:12.659042  353745 ssh_runner.go:195] Run: rm -f paused
	I0916 11:51:12.751470  353745 out.go:177] * Done! kubectl is now configured to use "no-preload-179932" cluster and "default" namespace by default
	E0916 11:51:12.791894  353745 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
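	
	The "exec format error" on /usr/local/bin/kubectl above means the kernel refused to execute the binary, which almost always indicates a truncated download or a binary built for a different CPU architecture than this amd64 host; it is why kubectl-based steps fail even though the cluster itself came up. A quick diagnostic sketch (hypothetical commands, not part of the captured log):
	
	    # a healthy kubectl on this host should report "ELF 64-bit LSB executable, x86-64"
	    file /usr/local/bin/kubectl
	    # compare against the checksum published for the expected release
	    # (v1.31.1 assumed, matching the cluster version above)
	    sha256sum /usr/local/bin/kubectl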
	
	
	==> CRI-O <==
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686014530Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=8fa9e99f-9f75-4dba-92e6-a499f81e7d6e name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686178090Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:bb97ed7cb2429a420726fbc329199f4600f59ea307bf93745052a9dd7e3f9955],Size_:63269914,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8fa9e99f-9f75-4dba-92e6-a499f81e7d6e name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686818980Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-sfxnk/coredns" id=4f48f1dd-9fe1-44ee-bc6a-ca92014a90e9 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.686894528Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.695539343Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7d28120e820275943bf61bbc418c5d626d58de7cf91c37ff58a8d3f09511b328/merged/etc/passwd: no such file or directory"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.695574739Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7d28120e820275943bf61bbc418c5d626d58de7cf91c37ff58a8d3f09511b328/merged/etc/group: no such file or directory"
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.732731855Z" level=info msg="Created container 319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c: kube-system/storage-provisioner/storage-provisioner" id=863009b1-e288-4943-af1a-62501e05710f name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.733590659Z" level=info msg="Starting container: 319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c" id=413ac176-bf5d-4bdb-85b8-9aee1826b477 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.740503723Z" level=info msg="Started container" PID=3240 containerID=319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c description=kube-system/storage-provisioner/storage-provisioner id=413ac176-bf5d-4bdb-85b8-9aee1826b477 name=/runtime.v1.RuntimeService/StartContainer sandboxID=12785168d30bd14a1cc2dc6399b74aa1137f3ce5f50dbac8ec101d017e6338ac
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.744459532Z" level=info msg="Created container 1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac: kube-system/coredns-7c65d6cfc9-sfxnk/coredns" id=4f48f1dd-9fe1-44ee-bc6a-ca92014a90e9 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.745066701Z" level=info msg="Starting container: 1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac" id=dd10f8cb-ef54-4a2d-9029-849d6f82fa90 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:51:10 no-preload-179932 crio[1039]: time="2024-09-16 11:51:10.751399545Z" level=info msg="Started container" PID=3255 containerID=1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac description=kube-system/coredns-7c65d6cfc9-sfxnk/coredns id=dd10f8cb-ef54-4a2d-9029-849d6f82fa90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b913c18240cf0e8dd7d375145b81c674010cafd0f8eb5bf5fb483007b2b3943
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.438057164Z" level=info msg="Running pod sandbox: kube-system/metrics-server-6867b74b74-xcgqq/POD" id=5e4f6ab7-f25f-4f1f-8416-0d4560867d51 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.438140283Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.451676928Z" level=info msg="Got pod network &{Name:metrics-server-6867b74b74-xcgqq Namespace:kube-system ID:4c52d240d662e0542cd21ab8633793055d46052bd8c32db45534913c971760bb UID:52862a21-d441-454e-8a52-0179b6f6c093 NetNS:/var/run/netns/ca8ed03e-60c3-4aae-ab8e-6f926152d164 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.451724078Z" level=info msg="Adding pod kube-system_metrics-server-6867b74b74-xcgqq to CNI network \"kindnet\" (type=ptp)"
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.462430106Z" level=info msg="Got pod network &{Name:metrics-server-6867b74b74-xcgqq Namespace:kube-system ID:4c52d240d662e0542cd21ab8633793055d46052bd8c32db45534913c971760bb UID:52862a21-d441-454e-8a52-0179b6f6c093 NetNS:/var/run/netns/ca8ed03e-60c3-4aae-ab8e-6f926152d164 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.462595543Z" level=info msg="Checking pod kube-system_metrics-server-6867b74b74-xcgqq for CNI network kindnet (type=ptp)"
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.465078667Z" level=info msg="Ran pod sandbox 4c52d240d662e0542cd21ab8633793055d46052bd8c32db45534913c971760bb with infra container: kube-system/metrics-server-6867b74b74-xcgqq/POD" id=5e4f6ab7-f25f-4f1f-8416-0d4560867d51 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.466350457Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e3a03fd9-8236-49f3-a015-e2b35778cafe name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.466595008Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e3a03fd9-8236-49f3-a015-e2b35778cafe name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.467070694Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=c80f48e5-926d-4499-ad96-1c1cb5864ffa name=/runtime.v1.ImageService/PullImage
	Sep 16 11:51:17 no-preload-179932 crio[1039]: time="2024-09-16 11:51:17.493787824Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:51:18 no-preload-179932 crio[1039]: time="2024-09-16 11:51:18.061522539Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9353019e-8019-4e46-96e9-1f6309eebdc1 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:51:18 no-preload-179932 crio[1039]: time="2024-09-16 11:51:18.061785417Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9353019e-8019-4e46-96e9-1f6309eebdc1 name=/runtime.v1.ImageService/ImageStatus
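	
	The repeated "Image fake.domain/registry.k8s.io/echoserver:1.4 not found" entries above show why the metrics-server pod never becomes ready: fake.domain is not a resolvable registry host, so every pull attempt fails. The failure can be reproduced by hand (a sketch assuming crictl is installed and pointed at the CRI-O socket listed in the node annotations):
	
	    # expected to fail with a host-resolution error rather than succeed
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	        pull fake.domain/registry.k8s.io/echoserver:1.4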
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a534bc0b815b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     7 seconds ago       Running             coredns                   0                   9b913c18240cf       coredns-7c65d6cfc9-sfxnk
	319ec20c27cc4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     7 seconds ago       Running             storage-provisioner       0                   12785168d30bd       storage-provisioner
	4d6a1ab5026f1       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b   18 seconds ago      Running             kindnet-cni               0                   c69d7a8de2d53       kindnet-2678b
	589063428fb28       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                     22 seconds ago      Running             kube-proxy                0                   c69cfe3f95afb       kube-proxy-ckd46
	4a9a8c6b23212       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                     35 seconds ago      Running             kube-controller-manager   0                   12f1b77dcc6a5       kube-controller-manager-no-preload-179932
	6aec60ed07214       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                     35 seconds ago      Running             kube-scheduler            0                   c99d8af113358       kube-scheduler-no-preload-179932
	8d5a1ec60515c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     35 seconds ago      Running             etcd                      0                   36eff604d6002       etcd-no-preload-179932
	3a0b6ce23d737       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                     35 seconds ago      Running             kube-apiserver            0                   e61434917d78a       kube-apiserver-no-preload-179932
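	
	A listing like the table above can be regenerated on the node with crictl (a sketch; the runtime endpoint is taken from the cri-socket annotation shown in the node description below):
	
	    # -a includes exited containers; output covers image, state, name, and pod
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a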
	
	
	==> coredns [1a534bc0b815bf4f01d80fe4c42801aab30c553653dfcf809b96bbc5bb95caac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52027 - 34155 "HINFO IN 7043137295982352462.1682836216271367565. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011352241s
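	
	The host.minikube.internal record CoreDNS serves here comes from the ConfigMap rewrite logged at 11:50:53 above: the sed pipeline inserts a log directive before errors and a hosts block before the forward line. The resulting Corefile fragment (reconstructed from the sed expressions; the full Corefile is not captured in the log):
	
	    .:53 {
	        log
	        errors
	        # ... existing directives (health, ready, kubernetes, ...) unchanged
	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        # ... remaining directives (cache, loop, reload, ...) unchanged
	    }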
	
	
	==> describe nodes <==
	Name:               no-preload-179932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-179932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=no-preload-179932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_50_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:50:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-179932
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:51:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:51:10 +0000   Mon, 16 Sep 2024 11:51:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-179932
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2b5d727e19a44ae98155858b9a8e152
	  System UUID:                93f9cbba-c2f8-4376-ab54-e687ad96b58b
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sfxnk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-179932                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-2678b                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-179932             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-179932    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-ckd46                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-179932             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 metrics-server-6867b74b74-xcgqq              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 22s   kube-proxy       
	  Normal   Starting                 31s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  30s   kubelet          Node no-preload-179932 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    30s   kubelet          Node no-preload-179932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     30s   kubelet          Node no-preload-179932 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26s   node-controller  Node no-preload-179932 event: Registered Node no-preload-179932 in Controller
	  Normal   NodeReady                8s    kubelet          Node no-preload-179932 status is now: NodeReady
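	
	The node report above is standard kubectl describe output; with a working kubectl it can be regenerated directly (the context name is assumed to match the profile, as configured in the "Done!" line earlier):
	
	    kubectl --context no-preload-179932 describe node no-preload-179932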
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +2.015839] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000003] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +4.031723] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000031] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000002] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +8.194753] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000005] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000613] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000004] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a5a173559814
	[  +0.000001] ll header: 00000000: 02 42 d0 1c 76 9a 02 42 c0 a8 43 02 08 00
	[Sep16 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 7b 93 72 59 99 08 06
	[Sep16 11:38] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e c8 59 6d ba 48 08 06
	[Sep16 11:39] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 0e 56 ba 2b 08 08 06
	[  +0.072831] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 e4 c5 5d 5b cd 08 06
	
	
	==> etcd [8d5a1ec60515c3d2cf2ca04cb04d81bb6e475fd0facec6605bc2f2857dca90f5] <==
	{"level":"info","ts":"2024-09-16T11:50:43.301859Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:50:43.302121Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:50:43.302157Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:50:43.302244Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:50:43.302271Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:50:43.828178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T11:50:43.828283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.828306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:50:43.829531Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.829807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:50:43.829807Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-179932 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:50:43.829838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:50:43.830143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:50:43.830179Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:50:43.830312Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.830401Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.830433Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:50:43.831241Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:50:43.831311Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:50:43.832100Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T11:50:43.832209Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:51:18 up  1:33,  0 users,  load average: 1.27, 0.87, 0.83
	Linux no-preload-179932 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4d6a1ab5026f16f7b6b74929edce565d1b79109723753135d31aaf14d219b7b2] <==
	I0916 11:50:59.494689       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:50:59.494926       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0916 11:50:59.495223       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:50:59.495243       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:50:59.495259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:50:59.893976       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:50:59.893995       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:50:59.894000       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:51:00.094309       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:51:00.094342       1 metrics.go:61] Registering metrics
	I0916 11:51:00.094412       1 controller.go:374] Syncing nftables rules
	I0916 11:51:09.898148       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:51:09.898215       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3a0b6ce23d7370d3f0843ffa20a8f351fadb19d104cdb3b6c793368ecae40e03] <==
	E0916 11:51:17.110086       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:51:17.111447       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 11:51:17.209095       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.104.177.227"}
	W0916 11:51:17.214361       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:51:17.214440       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:51:17.220180       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:51:17.220238       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:51:18.105551       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:51:18.105557       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:51:18.105595       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:51:18.105688       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:51:18.106740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:51:18.106762       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
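	
	The v1beta1.metrics.k8s.io errors above are a downstream symptom of the unpullable metrics-server image seen in the CRI-O section: the aggregated APIService has no healthy backing endpoints, so the apiserver cannot download its OpenAPI spec. One way to confirm once a working kubectl is available (a hypothetical check, not from the log):
	
	    # while metrics-server is down, the Available condition is expected to be False
	    kubectl --context no-preload-179932 get apiservice v1beta1.metrics.k8s.io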
	
	
	==> kube-controller-manager [4a9a8c6b232126b3a3f834266ab09739227dd047f65a57809b27690d13071f64] <==
	I0916 11:50:52.668325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:50:52.701193       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:50:52.701224       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:50:53.010133       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:50:53.220087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="562.02849ms"
	I0916 11:50:53.228924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.785211ms"
	I0916 11:50:53.229036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.151µs"
	I0916 11:50:53.229283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.06µs"
	I0916 11:50:53.642413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.624ms"
	I0916 11:50:53.693844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.377531ms"
	I0916 11:50:53.694092       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.73µs"
	I0916 11:51:10.344634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:10.352621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:10.357500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.304µs"
	I0916 11:51:10.378868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="91.565µs"
	I0916 11:51:11.071302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.849603ms"
	I0916 11:51:11.071412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.462µs"
	I0916 11:51:12.024328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	I0916 11:51:12.024346       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 11:51:17.137755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="16.232791ms"
	I0916 11:51:17.143267       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="5.461926ms"
	I0916 11:51:17.143368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="62.873µs"
	I0916 11:51:17.147528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="59.123µs"
	I0916 11:51:18.072924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="102.061µs"
	I0916 11:51:18.459475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-179932"
	
	
	==> kube-proxy [589063428fb28a5c87aad20f178d6bbf4342f3d4061b3649c5a14a2f2612be36] <==
	I0916 11:50:55.310468       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:50:55.424560       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 11:50:55.424623       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:50:55.443689       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:50:55.443760       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:50:55.445611       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:50:55.446002       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:50:55.446028       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:50:55.447139       1 config.go:328] "Starting node config controller"
	I0916 11:50:55.447220       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:50:55.447186       1 config.go:199] "Starting service config controller"
	I0916 11:50:55.447262       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:50:55.447132       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:50:55.447289       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:50:55.547414       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:50:55.547439       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:50:55.547449       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [6aec60ed072148d0a4ddf5d94e307f15b744a472ca2e73827876970e20146006] <==
	W0916 11:50:45.313624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0916 11:50:45.313521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.313641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 11:50:45.313653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.313693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:50:45.313744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.313956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:50:45.313988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:45.314171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:45.314196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.120634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:50:46.120679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.331027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:50:46.331070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.353666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:50:46.353722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.411296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:50:46.411335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.429940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:50:46.429988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:50:46.458442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:50:46.458490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0916 11:50:46.910182       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.403029    2607 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c024fac-4113-4c1b-8b50-3e066e7b9b67-kube-api-access-ltv87 podName:2c024fac-4113-4c1b-8b50-3e066e7b9b67 nodeName:}" failed. No retries permitted until 2024-09-16 11:50:54.902996386 +0000 UTC m=+7.050915339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ltv87" (UniqueName: "kubernetes.io/projected/2c024fac-4113-4c1b-8b50-3e066e7b9b67-kube-api-access-ltv87") pod "kube-proxy-ckd46" (UID: "2c024fac-4113-4c1b-8b50-3e066e7b9b67") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: E0916 11:50:54.403062    2607 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-kube-api-access-mpmnk podName:28d0afc4-03fd-4b6e-8ced-8b440d6153ff nodeName:}" failed. No retries permitted until 2024-09-16 11:50:54.903050076 +0000 UTC m=+7.050969022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mpmnk" (UniqueName: "kubernetes.io/projected/28d0afc4-03fd-4b6e-8ced-8b440d6153ff-kube-api-access-mpmnk") pod "kindnet-2678b" (UID: "28d0afc4-03fd-4b6e-8ced-8b440d6153ff") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:50:54 no-preload-179932 kubelet[2607]: I0916 11:50:54.905280    2607 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:50:56 no-preload-179932 kubelet[2607]: I0916 11:50:56.027618    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ckd46" podStartSLOduration=3.027598211 podStartE2EDuration="3.027598211s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:50:56.027415248 +0000 UTC m=+8.175334204" watchObservedRunningTime="2024-09-16 11:50:56.027598211 +0000 UTC m=+8.175517164"
	Sep 16 11:50:58 no-preload-179932 kubelet[2607]: E0916 11:50:58.016042    2607 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487458015815892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92080,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:50:58 no-preload-179932 kubelet[2607]: E0916 11:50:58.016091    2607 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487458015815892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92080,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:02 no-preload-179932 kubelet[2607]: I0916 11:51:02.695100    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2678b" podStartSLOduration=5.586249323 podStartE2EDuration="9.695079637s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="2024-09-16 11:50:55.227033013 +0000 UTC m=+7.374951948" lastFinishedPulling="2024-09-16 11:50:59.335863327 +0000 UTC m=+11.483782262" observedRunningTime="2024-09-16 11:51:00.036700007 +0000 UTC m=+12.184618972" watchObservedRunningTime="2024-09-16 11:51:02.695079637 +0000 UTC m=+14.842998616"
	Sep 16 11:51:08 no-preload-179932 kubelet[2607]: E0916 11:51:08.017291    2607 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487468017096773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:102273,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:08 no-preload-179932 kubelet[2607]: E0916 11:51:08.017367    2607 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487468017096773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:102273,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.337301    2607 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518235    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24qdf\" (UniqueName: \"kubernetes.io/projected/ec2c3f40-5323-4dce-ae07-29c4537f3067-kube-api-access-24qdf\") pod \"coredns-7c65d6cfc9-sfxnk\" (UID: \"ec2c3f40-5323-4dce-ae07-29c4537f3067\") " pod="kube-system/coredns-7c65d6cfc9-sfxnk"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518285    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdhnp\" (UniqueName: \"kubernetes.io/projected/040e8794-ddea-4f91-b709-cb999b3c71d5-kube-api-access-tdhnp\") pod \"storage-provisioner\" (UID: \"040e8794-ddea-4f91-b709-cb999b3c71d5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518302    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec2c3f40-5323-4dce-ae07-29c4537f3067-config-volume\") pod \"coredns-7c65d6cfc9-sfxnk\" (UID: \"ec2c3f40-5323-4dce-ae07-29c4537f3067\") " pod="kube-system/coredns-7c65d6cfc9-sfxnk"
	Sep 16 11:51:10 no-preload-179932 kubelet[2607]: I0916 11:51:10.518330    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/040e8794-ddea-4f91-b709-cb999b3c71d5-tmp\") pod \"storage-provisioner\" (UID: \"040e8794-ddea-4f91-b709-cb999b3c71d5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:51:11 no-preload-179932 kubelet[2607]: I0916 11:51:11.055193    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=18.055168777 podStartE2EDuration="18.055168777s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:51:11.055129269 +0000 UTC m=+23.203048223" watchObservedRunningTime="2024-09-16 11:51:11.055168777 +0000 UTC m=+23.203087726"
	Sep 16 11:51:11 no-preload-179932 kubelet[2607]: I0916 11:51:11.065541    2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sfxnk" podStartSLOduration=18.06551962 podStartE2EDuration="18.06551962s" podCreationTimestamp="2024-09-16 11:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:51:11.065119525 +0000 UTC m=+23.213038480" watchObservedRunningTime="2024-09-16 11:51:11.06551962 +0000 UTC m=+23.213438552"
	Sep 16 11:51:17 no-preload-179932 kubelet[2607]: I0916 11:51:17.254858    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvkt\" (UniqueName: \"kubernetes.io/projected/52862a21-d441-454e-8a52-0179b6f6c093-kube-api-access-kwvkt\") pod \"metrics-server-6867b74b74-xcgqq\" (UID: \"52862a21-d441-454e-8a52-0179b6f6c093\") " pod="kube-system/metrics-server-6867b74b74-xcgqq"
	Sep 16 11:51:17 no-preload-179932 kubelet[2607]: I0916 11:51:17.254917    2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/52862a21-d441-454e-8a52-0179b6f6c093-tmp-dir\") pod \"metrics-server-6867b74b74-xcgqq\" (UID: \"52862a21-d441-454e-8a52-0179b6f6c093\") " pod="kube-system/metrics-server-6867b74b74-xcgqq"
	Sep 16 11:51:17 no-preload-179932 kubelet[2607]: E0916 11:51:17.527478    2607 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:51:17 no-preload-179932 kubelet[2607]: E0916 11:51:17.527562    2607 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:51:17 no-preload-179932 kubelet[2607]: E0916 11:51:17.527771    2607 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwvkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-xcgqq_kube-system(52862a21-d441-454e-8a52-0179b6f6c093): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 16 11:51:17 no-preload-179932 kubelet[2607]: E0916 11:51:17.528990    2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-xcgqq" podUID="52862a21-d441-454e-8a52-0179b6f6c093"
	Sep 16 11:51:18 no-preload-179932 kubelet[2607]: E0916 11:51:18.018447    2607 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487478018234891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:102273,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:18 no-preload-179932 kubelet[2607]: E0916 11:51:18.018485    2607 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487478018234891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:102273,},InodesUsed:&UInt64Value{Value:49,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:51:18 no-preload-179932 kubelet[2607]: E0916 11:51:18.062031    2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xcgqq" podUID="52862a21-d441-454e-8a52-0179b6f6c093"
	
	
	==> storage-provisioner [319ec20c27cc4fe4089d379b239c1c595836d126b1075f5ba21e8a7f54790e1c] <==
	I0916 11:51:10.752747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:51:10.762574       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:51:10.762667       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:51:10.798892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:51:10.799029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6492543-a96c-4e35-8fc0-19e6c7bc9c6d", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9 became leader
	I0916 11:51:10.799116       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9!
	I0916 11:51:10.899335       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-179932_af81e078-dbe8-447d-8e1d-3559ecc560e9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-179932 -n no-preload-179932
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (837.825µs)
helpers_test.go:263: kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.58s)
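Note: every kubectl invocation in this run fails the same way, with "fork/exec /usr/local/bin/kubectl: exec format error". That error is raised by the Linux kernel when it refuses to execute the binary at all, which almost always means the file was built for a different CPU architecture (or is truncated/corrupt), so every kubectl-dependent assertion in this report fails regardless of cluster health. A minimal way to confirm this on the CI host is sketched below; the commands are illustrative and were not part of the recorded run:

    # Compare the binary's target architecture with the host's.
    file /usr/local/bin/kubectl    # an x86-64 ELF executable is expected on this amd64 agent
    uname -m                       # host architecture; docker info below reports Architecture:x86_64
    # A valid ELF binary begins with the magic bytes 7f 45 4c 46 ("\x7fELF").
    head -c 4 /usr/local/bin/kubectl | xxd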

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.91s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qznkx" [2e06c663-e6f4-4dc5-96d5-e2c7c06a77c6] Running
E0916 11:55:54.648510   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004222971s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-179932 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-179932 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (644.713µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-179932 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
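Note: two separate effects are visible in this check. The ImagePullBackOff for "fake.domain/registry.k8s.io/echoserver:1.4" in the kubelet logs above is expected behavior: the test deliberately enables the metrics-server addon against an unreachable registry, as recorded in the Audit table below (line breaks added here for readability):

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-179932 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain

The assertion at start_stop_delete_test.go:297 then fails only because the "kubectl describe" above it returned nothing (the same exec format error), leaving the deployment info empty; this is a cascading failure of the broken kubectl binary rather than an addon image regression.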
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-179932
helpers_test.go:235: (dbg) docker inspect no-preload-179932:

-- stdout --
	[
	    {
	        "Id": "33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db",
	        "Created": "2024-09-16T11:50:18.324141753Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 361285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:51:25.436800555Z",
	            "FinishedAt": "2024-09-16T11:51:24.525412229Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/hostname",
	        "HostsPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/hosts",
	        "LogPath": "/var/lib/docker/containers/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db/33415cb7fa837265ef4e5c0ac0810f2c57749f9ba237d5aad908be797bd7f1db-json.log",
	        "Name": "/no-preload-179932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-179932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-179932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65101d9df3c67c2dd006de9217bbcccb2398eeaf7f8a706f29e2be9009008a23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-179932",
	                "Source": "/var/lib/docker/volumes/no-preload-179932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-179932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-179932",
	                "name.minikube.sigs.k8s.io": "no-preload-179932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9edfd13e437afa70554e4714d574c4d44e508abc2985e77b4e65d80c23a580a5",
	            "SandboxKey": "/var/run/docker/netns/9edfd13e437a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-179932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3318c5c795cbdaf6a4546ff9f05fc1f3534565776857632d9afa204a3c5ca91f",
	                    "EndpointID": "990da0038a16481a222f341e7c5470b6fff67007eb8b1ed1db363c5642142991",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-179932",
	                        "33415cb7fa83"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-179932 -n no-preload-179932
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-179932 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-179932 logs -n 25: (1.219702287s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467 sudo cat                  | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467        | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-406673 image                           | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-946599 | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | disable-driver-mounts-946599                           |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-179932             | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-179932                  | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:51:25
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:51:25.048495  360990 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:51:25.048611  360990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:51:25.048622  360990 out.go:358] Setting ErrFile to fd 2...
	I0916 11:51:25.048629  360990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:51:25.048838  360990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:51:25.049436  360990 out.go:352] Setting JSON to false
	I0916 11:51:25.050613  360990 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5625,"bootTime":1726481860,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:51:25.050679  360990 start.go:139] virtualization: kvm guest
	I0916 11:51:25.053040  360990 out.go:177] * [no-preload-179932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:51:25.054593  360990 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:51:25.054619  360990 notify.go:220] Checking for updates...
	I0916 11:51:25.057618  360990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:51:25.058937  360990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:51:25.060451  360990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:51:25.062178  360990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:51:25.063540  360990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:51:25.065582  360990 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:51:25.066065  360990 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:51:25.090061  360990 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:51:25.090194  360990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:51:25.144720  360990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:51:25.13520681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:51:25.144831  360990 docker.go:318] overlay module found
	I0916 11:51:25.146800  360990 out.go:177] * Using the docker driver based on existing profile
	I0916 11:51:25.148106  360990 start.go:297] selected driver: docker
	I0916 11:51:25.148120  360990 start.go:901] validating driver "docker" against &{Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:51:25.148202  360990 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:51:25.148905  360990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:51:25.200022  360990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:51:25.190798113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:51:25.200432  360990 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:51:25.200474  360990 cni.go:84] Creating CNI manager for ""
	I0916 11:51:25.200520  360990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:51:25.200576  360990 start.go:340] cluster config:
	{Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:51:25.202626  360990 out.go:177] * Starting "no-preload-179932" primary control-plane node in "no-preload-179932" cluster
	I0916 11:51:25.203958  360990 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:51:25.205499  360990 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:51:25.206946  360990 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:51:25.207037  360990 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:51:25.207101  360990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json ...
	I0916 11:51:25.207251  360990 cache.go:107] acquiring lock: {Name:mk871ae736ce09ba2b4421598649b9ecfc9a98bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.207324  360990 cache.go:107] acquiring lock: {Name:mk0d227841b16d1443985320c46c5945df5de856 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.207361  360990 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:51:25.207335  360990 cache.go:107] acquiring lock: {Name:mkbb0d7522afd30851ddf834442136fb3567a26a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.207352  360990 cache.go:107] acquiring lock: {Name:mk8b23bbceb92ce965299065ca3d25050387467b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.207375  360990 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 134.617µs
	I0916 11:51:25.207389  360990 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:51:25.207395  360990 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:51:25.207397  360990 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 85.233µs
	I0916 11:51:25.207407  360990 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:51:25.207407  360990 cache.go:107] acquiring lock: {Name:mkc9fa4e48807b59cdf7eefb19d5245546dc831d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.207417  360990 cache.go:107] acquiring lock: {Name:mkf3f21a53f01d1ee0608b28c94cf582dc8c355f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.207406  360990 cache.go:107] acquiring lock: {Name:mk540470437675d9c95f2acaf015b6015148e24f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.207447  360990 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:51:25.207454  360990 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 50.644µs
	I0916 11:51:25.207461  360990 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:51:25.207441  360990 cache.go:107] acquiring lock: {Name:mkfcf90f9df5885fe87d6ff86cdb7f8f58dec344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.207481  360990 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:51:25.207484  360990 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:51:25.207488  360990 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 204.84µs
	I0916 11:51:25.207497  360990 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:51:25.207502  360990 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 85.954µs
	I0916 11:51:25.207520  360990 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:51:25.207511  360990 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:51:25.207538  360990 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 102.8µs
	I0916 11:51:25.207553  360990 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:51:25.207541  360990 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:51:25.207577  360990 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 236.57µs
	I0916 11:51:25.207596  360990 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:51:25.207598  360990 cache.go:115] /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:51:25.207611  360990 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 274.607µs
	I0916 11:51:25.207631  360990 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:51:25.207654  360990 cache.go:87] Successfully saved all images to host disk.
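
Each image above follows the same lock/check/save shape: acquire a per-image lock, skip the save when the cached tarball already exists, otherwise write it and record the elapsed time. A minimal Go sketch of that shape — pullToTar is a hypothetical stand-in for the real registry pull, not minikube's actual API:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
        "time"
    )

    // locks guards concurrent saves of the same image, mirroring the
    // per-image "acquiring lock" lines in the log above.
    var locks sync.Map

    // cacheImage writes img to a tarball under cacheDir unless one already
    // exists, returning how long the check/save took.
    func cacheImage(cacheDir, img string) (time.Duration, error) {
        start := time.Now()
        mu, _ := locks.LoadOrStore(img, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        dst := filepath.Join(cacheDir, strings.NewReplacer(":", "_", "/", "_").Replace(img))
        if _, err := os.Stat(dst); err == nil {
            return time.Since(start), nil // "<path> exists": skip the save
        }
        if err := pullToTar(img, dst); err != nil {
            return time.Since(start), fmt.Errorf("save %s: %w", img, err)
        }
        return time.Since(start), nil
    }

    // pullToTar stands in for the real registry pull plus tar write.
    func pullToTar(img, dst string) error {
        return os.WriteFile(dst, []byte{}, 0o644)
    }

    func main() {
        took, err := cacheImage(os.TempDir(), "registry.k8s.io/pause:3.10")
        fmt.Println(took, err)
    }

The microsecond durations in the log are exactly this cache-hit path: every image already has a tarball, so each call is a stat plus a log line.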
	W0916 11:51:25.233819  360990 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:51:25.233846  360990 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:51:25.233930  360990 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:51:25.233951  360990 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:51:25.233959  360990 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:51:25.233968  360990 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:51:25.233979  360990 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:51:25.297055  360990 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:51:25.297106  360990 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:51:25.297146  360990 start.go:360] acquireMachinesLock for no-preload-179932: {Name:mkd475c3f7aed9017143023aeb4fceb62fe6c60d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:51:25.297221  360990 start.go:364] duration metric: took 52.915µs to acquireMachinesLock for "no-preload-179932"
	I0916 11:51:25.297246  360990 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:51:25.297253  360990 fix.go:54] fixHost starting: 
	I0916 11:51:25.297563  360990 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:51:25.315488  360990 fix.go:112] recreateIfNeeded on no-preload-179932: state=Stopped err=<nil>
	W0916 11:51:25.315539  360990 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:51:25.318564  360990 out.go:177] * Restarting existing docker container for "no-preload-179932" ...
	I0916 11:51:25.319929  360990 cli_runner.go:164] Run: docker start no-preload-179932
	I0916 11:51:25.599136  360990 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:51:25.618631  360990 kic.go:430] container "no-preload-179932" state is running.
	I0916 11:51:25.619080  360990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:51:25.639428  360990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/config.json ...
	I0916 11:51:25.639713  360990 machine.go:93] provisionDockerMachine start ...
	I0916 11:51:25.639791  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:25.658603  360990 main.go:141] libmachine: Using SSH client type: native
	I0916 11:51:25.658896  360990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0916 11:51:25.658917  360990 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:51:25.659611  360990 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48192->127.0.0.1:33103: read: connection reset by peer
	I0916 11:51:28.796874  360990 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179932
	
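The failed handshake followed by a clean result a few seconds later is the usual dial-until-sshd-is-up loop after a container restart. A rough Go sketch using golang.org/x/crypto/ssh — the key path is a placeholder, and the user/port are taken from this run:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps attempting the SSH handshake until the freshly
    // restarted container's sshd is ready, mirroring the "connection reset
    // by peer" line above that is followed by a successful command.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
        var lastErr error
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(time.Second) {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err // e.g. connection reset while sshd starts
        }
        return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
    }

    func main() {
        key, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa")) // placeholder key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
            Timeout:         5 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:33103", cfg, time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.Output("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }
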
	I0916 11:51:28.796907  360990 ubuntu.go:169] provisioning hostname "no-preload-179932"
	I0916 11:51:28.796982  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:28.814954  360990 main.go:141] libmachine: Using SSH client type: native
	I0916 11:51:28.815121  360990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0916 11:51:28.815134  360990 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-179932 && echo "no-preload-179932" | sudo tee /etc/hostname
	I0916 11:51:28.960570  360990 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179932
	
	I0916 11:51:28.960646  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:28.978929  360990 main.go:141] libmachine: Using SSH client type: native
	I0916 11:51:28.979234  360990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0916 11:51:28.979269  360990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-179932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-179932/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-179932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:51:29.113763  360990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:51:29.113794  360990 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:51:29.113819  360990 ubuntu.go:177] setting up certificates
	I0916 11:51:29.113830  360990 provision.go:84] configureAuth start
	I0916 11:51:29.113885  360990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:51:29.130716  360990 provision.go:143] copyHostCerts
	I0916 11:51:29.130783  360990 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:51:29.130796  360990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:51:29.130870  360990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:51:29.131060  360990 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:51:29.131075  360990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:51:29.131119  360990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:51:29.131196  360990 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:51:29.131206  360990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:51:29.131240  360990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:51:29.131312  360990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.no-preload-179932 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-179932]
	I0916 11:51:29.373124  360990 provision.go:177] copyRemoteCerts
	I0916 11:51:29.373197  360990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:51:29.373234  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:29.391637  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:29.485834  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:51:29.507686  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:51:29.531266  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:51:29.554130  360990 provision.go:87] duration metric: took 440.286054ms to configureAuth
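
configureAuth regenerates the server certificate with the SAN list shown above (two IPs plus three hostnames) and signs it with the existing CA. A sketch of that step using only the standard library, under the assumption that the CA key is an RSA PKCS#1 PEM (file names here are placeholders for the ~/.minikube/certs paths in the log):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the existing CA pair.
        caPEM, err := os.ReadFile("ca.pem")
        check(err)
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        check(err)
        caBlock, _ := pem.Decode(caPEM)
        caKeyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || caKeyBlock == nil {
            panic("no PEM block found")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        check(err)
        caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
        check(err)

        // Fresh server key and a template carrying the SANs from the log.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-179932"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-179932"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        check(err)
        check(os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
        check(os.WriteFile("server-key.pem",
            pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
                Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
    }

The scp lines that follow then push ca.pem, server.pem, and server-key.pem to the paths under /etc/docker named in the auth options above.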
	I0916 11:51:29.554167  360990 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:51:29.554337  360990 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:51:29.554420  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:29.571421  360990 main.go:141] libmachine: Using SSH client type: native
	I0916 11:51:29.571624  360990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0916 11:51:29.571656  360990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:51:29.864403  360990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:51:29.864431  360990 machine.go:96] duration metric: took 4.224700903s to provisionDockerMachine
	I0916 11:51:29.864448  360990 start.go:293] postStartSetup for "no-preload-179932" (driver="docker")
	I0916 11:51:29.864461  360990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:51:29.864535  360990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:51:29.864594  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:29.882872  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:29.978255  360990 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:51:29.981386  360990 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:51:29.981418  360990 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:51:29.981426  360990 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:51:29.981432  360990 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:51:29.981443  360990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:51:29.981493  360990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:51:29.981569  360990 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:51:29.981664  360990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:51:29.989722  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:51:30.012325  360990 start.go:296] duration metric: took 147.863126ms for postStartSetup
	I0916 11:51:30.012406  360990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:51:30.012450  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:30.031019  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:30.126063  360990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:51:30.130150  360990 fix.go:56] duration metric: took 4.832890704s for fixHost
	I0916 11:51:30.130173  360990 start.go:83] releasing machines lock for "no-preload-179932", held for 4.832939332s
	I0916 11:51:30.130239  360990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179932
	I0916 11:51:30.147415  360990 ssh_runner.go:195] Run: cat /version.json
	I0916 11:51:30.147465  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:30.147496  360990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:51:30.147590  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:30.165027  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:30.166039  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:30.328891  360990 ssh_runner.go:195] Run: systemctl --version
	I0916 11:51:30.333690  360990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:51:30.472394  360990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:51:30.476773  360990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:51:30.484963  360990 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:51:30.485041  360990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:51:30.493617  360990 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:51:30.493654  360990 start.go:495] detecting cgroup driver to use...
	I0916 11:51:30.493693  360990 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:51:30.493744  360990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:51:30.505512  360990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:51:30.516723  360990 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:51:30.516785  360990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:51:30.529899  360990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:51:30.541402  360990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:51:30.621726  360990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:51:30.705150  360990 docker.go:233] disabling docker service ...
	I0916 11:51:30.705208  360990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:51:30.718214  360990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:51:30.729402  360990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:51:30.807492  360990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:51:30.891400  360990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:51:30.902956  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:51:30.919463  360990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:51:30.919515  360990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:51:30.929090  360990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:51:30.929143  360990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:51:30.938871  360990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:51:30.948352  360990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:51:30.958281  360990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:51:30.967762  360990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:51:30.978006  360990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:51:30.987652  360990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:51:30.997617  360990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:51:31.006096  360990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:51:31.014455  360990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:51:31.096752  360990 ssh_runner.go:195] Run: sudo systemctl restart crio
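
The sed pipeline above rewrites two keys in CRI-O's drop-in config (the pause image and the cgroup manager) and then restarts the daemon. The same edit expressed in Go, as a sketch with error handling reduced to panics and the path/values taken from the log:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Replace whole lines, exactly as the sed 's|^.*key = .*$|...|' calls do.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
        // The log then runs: systemctl daemon-reload && systemctl restart crio.
    }
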
	I0916 11:51:31.193675  360990 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:51:31.193748  360990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:51:31.197085  360990 start.go:563] Will wait 60s for crictl version
	I0916 11:51:31.197131  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:51:31.200530  360990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:51:31.234313  360990 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:51:31.234402  360990 ssh_runner.go:195] Run: crio --version
	I0916 11:51:31.269362  360990 ssh_runner.go:195] Run: crio --version
	I0916 11:51:31.307593  360990 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:51:31.309117  360990 cli_runner.go:164] Run: docker network inspect no-preload-179932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:51:31.327234  360990 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:51:31.330900  360990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
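
The grep/echo pipeline above keeps every /etc/hosts line that does not already map the name, appends the fresh mapping, and copies the result back into place. An equivalent Go sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the pipeline: drop any line already ending
    // with "<tab><name>", append the new mapping, and rewrite the file.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
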
	I0916 11:51:31.341099  360990 kubeadm.go:883] updating cluster {Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:51:31.341211  360990 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:51:31.341251  360990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:51:31.381854  360990 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:51:31.381877  360990 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:51:31.381889  360990 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 11:51:31.381995  360990 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-179932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:51:31.382077  360990 ssh_runner.go:195] Run: crio config
	I0916 11:51:31.425089  360990 cni.go:84] Creating CNI manager for ""
	I0916 11:51:31.425113  360990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:51:31.425124  360990 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:51:31.425150  360990 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-179932 NodeName:no-preload-179932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:51:31.425312  360990 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-179932"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:51:31.425411  360990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:51:31.433835  360990 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:51:31.433903  360990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:51:31.443202  360990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0916 11:51:31.461133  360990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:51:31.477512  360990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0916 11:51:31.493875  360990 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:51:31.497042  360990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:51:31.507678  360990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:51:31.584126  360990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:51:31.596380  360990 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932 for IP: 192.168.103.2
	I0916 11:51:31.596402  360990 certs.go:194] generating shared ca certs ...
	I0916 11:51:31.596422  360990 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:51:31.596572  360990 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:51:31.596634  360990 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:51:31.596648  360990 certs.go:256] generating profile certs ...
	I0916 11:51:31.596764  360990 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.key
	I0916 11:51:31.596830  360990 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key.a7025391
	I0916 11:51:31.596889  360990 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key
	I0916 11:51:31.597028  360990 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:51:31.597068  360990 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:51:31.597082  360990 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:51:31.597118  360990 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:51:31.597152  360990 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:51:31.597187  360990 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:51:31.597243  360990 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:51:31.598005  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:51:31.623014  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:51:31.647996  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:51:31.704835  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:51:31.730123  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:51:31.755562  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:51:31.795587  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:51:31.819909  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:51:31.843317  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:51:31.867603  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:51:31.890296  360990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:51:31.913062  360990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:51:31.929814  360990 ssh_runner.go:195] Run: openssl version
	I0916 11:51:31.935115  360990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:51:31.944618  360990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:51:31.947977  360990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:51:31.948034  360990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:51:31.954970  360990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:51:31.964065  360990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:51:31.973358  360990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:51:31.976656  360990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:51:31.976713  360990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:51:31.983251  360990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:51:31.991665  360990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:51:32.000408  360990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:51:32.003720  360990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:51:32.003775  360990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:51:32.010302  360990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:51:32.018598  360990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:51:32.022143  360990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:51:32.028412  360990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:51:32.035332  360990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:51:32.041713  360990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:51:32.048223  360990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:51:32.054609  360990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
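
Each openssl call above uses -checkend 86400 to ask whether a certificate will still be valid 24 hours from now; the restart only regenerates certs that fail this check. The same test expressed in Go:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path will have
    // expired d from now — the Go equivalent of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
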
	I0916 11:51:32.060531  360990 kubeadm.go:392] StartCluster: {Name:no-preload-179932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-179932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:51:32.060615  360990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:51:32.060664  360990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:51:32.094530  360990 cri.go:89] found id: ""
	I0916 11:51:32.094585  360990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:51:32.103126  360990 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:51:32.103147  360990 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:51:32.103185  360990 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:51:32.111528  360990 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:51:32.112257  360990 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-179932" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:51:32.112737  360990 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-179932" cluster setting kubeconfig missing "no-preload-179932" context setting]
	I0916 11:51:32.113492  360990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:51:32.115522  360990 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:51:32.124233  360990 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0916 11:51:32.124264  360990 kubeadm.go:597] duration metric: took 21.112253ms to restartPrimaryControlPlane
	I0916 11:51:32.124277  360990 kubeadm.go:394] duration metric: took 63.754619ms to StartCluster
	I0916 11:51:32.124297  360990 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:51:32.124363  360990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:51:32.126194  360990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:51:32.126428  360990 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:51:32.126631  360990 config.go:182] Loaded profile config "no-preload-179932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:51:32.126555  360990 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:51:32.126687  360990 addons.go:69] Setting storage-provisioner=true in profile "no-preload-179932"
	I0916 11:51:32.126695  360990 addons.go:69] Setting metrics-server=true in profile "no-preload-179932"
	I0916 11:51:32.126698  360990 addons.go:69] Setting default-storageclass=true in profile "no-preload-179932"
	I0916 11:51:32.126720  360990 addons.go:69] Setting dashboard=true in profile "no-preload-179932"
	I0916 11:51:32.126742  360990 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-179932"
	I0916 11:51:32.126746  360990 addons.go:234] Setting addon dashboard=true in "no-preload-179932"
	W0916 11:51:32.126760  360990 addons.go:243] addon dashboard should already be in state true
	I0916 11:51:32.126709  360990 addons.go:234] Setting addon storage-provisioner=true in "no-preload-179932"
	W0916 11:51:32.126813  360990 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:51:32.126855  360990 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:51:32.126798  360990 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:51:32.126709  360990 addons.go:234] Setting addon metrics-server=true in "no-preload-179932"
	W0916 11:51:32.126919  360990 addons.go:243] addon metrics-server should already be in state true
	I0916 11:51:32.126970  360990 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:51:32.127094  360990 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:51:32.127247  360990 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:51:32.127273  360990 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:51:32.127430  360990 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:51:32.128856  360990 out.go:177] * Verifying Kubernetes components...
	I0916 11:51:32.130931  360990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:51:32.153011  360990 addons.go:234] Setting addon default-storageclass=true in "no-preload-179932"
	W0916 11:51:32.153037  360990 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:51:32.153068  360990 host.go:66] Checking if "no-preload-179932" exists ...
	I0916 11:51:32.153552  360990 cli_runner.go:164] Run: docker container inspect no-preload-179932 --format={{.State.Status}}
	I0916 11:51:32.155442  360990 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:51:32.157274  360990 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:51:32.159080  360990 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:51:32.159092  360990 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:51:32.159110  360990 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:51:32.159082  360990 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:51:32.159174  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:32.160749  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:51:32.160779  360990 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:51:32.160828  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:32.160901  360990 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:51:32.160917  360990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:51:32.160961  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:32.185543  360990 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:51:32.185569  360990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:51:32.185629  360990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179932
	I0916 11:51:32.188179  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:32.192722  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:32.200498  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:32.222649  360990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/no-preload-179932/id_rsa Username:docker}
	I0916 11:51:32.396731  360990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:51:32.417778  360990 node_ready.go:35] waiting up to 6m0s for node "no-preload-179932" to be "Ready" ...
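
The node-readiness wait is a plain poll loop against a deadline. A generic sketch of that shape — nodeReady is stubbed here, since the real check queries the apiserver for the node's Ready condition:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check until it succeeds or the timeout passes,
    // mirroring the "waiting up to 6m0s for node ... to be Ready" line.
    func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Stub: the real condition reads the node object's status.
        nodeReady := func() (bool, error) { return true, nil }
        fmt.Println(waitFor(6*time.Minute, 2*time.Second, nodeReady))
    }
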
	I0916 11:51:32.419771  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:51:32.419790  360990 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:51:32.495260  360990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:51:32.496145  360990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:51:32.496165  360990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:51:32.513083  360990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:51:32.515960  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:51:32.515986  360990 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:51:32.614097  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:51:32.614127  360990 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:51:32.617196  360990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:51:32.617222  360990 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:51:32.718645  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:51:32.718677  360990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 11:51:32.802370  360990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:51:32.802399  360990 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:51:32.895905  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:51:32.895938  360990 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0916 11:51:32.905228  360990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:51:32.907517  360990 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:51:32.907546  360990 retry.go:31] will retry after 199.72194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 11:51:32.907585  360990 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:51:32.907596  360990 retry.go:31] will retry after 199.468423ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:51:32.918612  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:51:32.918641  360990 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:51:33.006281  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:51:33.006304  360990 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0916 11:51:33.026138  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:51:33.026223  360990 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:51:33.107534  360990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:51:33.107544  360990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:51:33.110379  360990 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:51:33.110402  360990 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:51:33.196044  360990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:51:35.303279  360990 node_ready.go:49] node "no-preload-179932" has status "Ready":"True"
	I0916 11:51:35.303322  360990 node_ready.go:38] duration metric: took 2.885495872s for node "no-preload-179932" to be "Ready" ...
	I0916 11:51:35.303336  360990 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:51:35.322397  360990 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.405521  360990 pod_ready.go:93] pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:35.405600  360990 pod_ready.go:82] duration metric: took 83.172539ms for pod "coredns-7c65d6cfc9-sfxnk" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.405632  360990 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.415490  360990 pod_ready.go:93] pod "etcd-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:35.415525  360990 pod_ready.go:82] duration metric: took 9.87435ms for pod "etcd-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.415543  360990 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.500368  360990 pod_ready.go:93] pod "kube-apiserver-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:35.500399  360990 pod_ready.go:82] duration metric: took 84.846477ms for pod "kube-apiserver-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.500416  360990 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.519626  360990 pod_ready.go:93] pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:35.519658  360990 pod_ready.go:82] duration metric: took 19.225281ms for pod "kube-controller-manager-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.519672  360990 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ckd46" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.602586  360990 pod_ready.go:93] pod "kube-proxy-ckd46" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:35.602616  360990 pod_ready.go:82] duration metric: took 82.936466ms for pod "kube-proxy-ckd46" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.602630  360990 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.908484  360990 pod_ready.go:93] pod "kube-scheduler-no-preload-179932" in "kube-system" namespace has status "Ready":"True"
	I0916 11:51:35.908516  360990 pod_ready.go:82] duration metric: took 305.877443ms for pod "kube-scheduler-no-preload-179932" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:35.908533  360990 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace to be "Ready" ...
	I0916 11:51:37.536960  360990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.631680611s)
	I0916 11:51:37.536997  360990 addons.go:475] Verifying addon metrics-server=true in "no-preload-179932"
	I0916 11:51:37.537002  360990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.429428755s)
	I0916 11:51:37.537063  360990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.429409682s)
	I0916 11:51:37.697534  360990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.501446189s)
	I0916 11:51:37.699597  360990 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-179932 addons enable metrics-server
	
	I0916 11:51:37.701070  360990 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0916 11:51:37.702544  360990 addons.go:510] duration metric: took 5.575986945s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0916 11:51:37.914033  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:39.915527  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:42.415408  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:44.914672  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:47.414387  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:49.914244  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:52.414768  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:54.914045  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:56.914518  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:51:58.914889  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:01.414462  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:03.414933  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:05.913854  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:07.914357  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:10.414430  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:12.414531  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:14.914723  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:17.414858  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:19.914187  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:21.914520  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:24.414136  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:26.414316  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:28.416283  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:30.913963  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:33.414886  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:35.914557  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:37.915262  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:40.414989  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:42.914174  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:45.414717  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:47.414777  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:49.913988  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:51.915002  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:54.415161  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:56.914033  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:52:58.914184  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:00.914752  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:02.914883  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:05.414757  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:07.914367  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:09.914818  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:12.414596  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:14.913877  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:16.914581  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:19.415052  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:21.914839  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:23.915095  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:26.413927  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:28.414154  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:30.414608  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:32.415384  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:34.914677  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:36.914731  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:39.413850  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:41.414741  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:43.414927  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:45.914132  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:47.914700  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:50.415166  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:52.914210  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:54.914515  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:56.914772  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:53:59.414754  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:01.914031  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:03.914438  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:05.914492  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:07.915301  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:10.414317  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:12.414886  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:14.914170  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:16.916146  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:19.414220  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:21.914695  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:24.414758  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:26.915594  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:29.414422  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:31.414804  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:33.914406  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:36.414449  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:38.414830  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:40.414990  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:42.915247  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:45.414112  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:47.914727  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:50.414363  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:52.414599  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:54.914816  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:57.414381  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:54:59.414995  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:01.415100  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:03.415318  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:05.427537  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:07.914101  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:10.413877  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:12.414630  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:14.914065  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:16.915135  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:19.414873  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:21.414930  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:23.415174  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:25.913948  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:27.914808  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:30.414779  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:32.915303  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:35.414710  360990 pod_ready.go:103] pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace has status "Ready":"False"
	I0916 11:55:35.914461  360990 pod_ready.go:82] duration metric: took 4m0.005914023s for pod "metrics-server-6867b74b74-xcgqq" in "kube-system" namespace to be "Ready" ...
	E0916 11:55:35.914483  360990 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:55:35.914490  360990 pod_ready.go:39] duration metric: took 4m0.611144401s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:55:35.914507  360990 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:55:35.914537  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:55:35.914593  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:55:35.949122  360990 cri.go:89] found id: "03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c"
	I0916 11:55:35.949149  360990 cri.go:89] found id: ""
	I0916 11:55:35.949158  360990 logs.go:276] 1 containers: [03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c]
	I0916 11:55:35.949218  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:35.952681  360990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:55:35.952748  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:55:35.985451  360990 cri.go:89] found id: "044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b"
	I0916 11:55:35.985475  360990 cri.go:89] found id: ""
	I0916 11:55:35.985484  360990 logs.go:276] 1 containers: [044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b]
	I0916 11:55:35.985545  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:35.989053  360990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:55:35.989130  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:55:36.023888  360990 cri.go:89] found id: "15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37"
	I0916 11:55:36.023908  360990 cri.go:89] found id: ""
	I0916 11:55:36.023917  360990 logs.go:276] 1 containers: [15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37]
	I0916 11:55:36.023976  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:36.027663  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:55:36.027730  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:55:36.061147  360990 cri.go:89] found id: "1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf"
	I0916 11:55:36.061166  360990 cri.go:89] found id: ""
	I0916 11:55:36.061173  360990 logs.go:276] 1 containers: [1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf]
	I0916 11:55:36.061223  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:36.064735  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:55:36.064802  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:55:36.097263  360990 cri.go:89] found id: "ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec"
	I0916 11:55:36.097282  360990 cri.go:89] found id: ""
	I0916 11:55:36.097289  360990 logs.go:276] 1 containers: [ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec]
	I0916 11:55:36.097378  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:36.100963  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:55:36.101026  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:55:36.135422  360990 cri.go:89] found id: "47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e"
	I0916 11:55:36.135440  360990 cri.go:89] found id: ""
	I0916 11:55:36.135447  360990 logs.go:276] 1 containers: [47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e]
	I0916 11:55:36.135485  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:36.139053  360990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:55:36.139113  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:55:36.173233  360990 cri.go:89] found id: "532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606"
	I0916 11:55:36.173256  360990 cri.go:89] found id: ""
	I0916 11:55:36.173263  360990 logs.go:276] 1 containers: [532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606]
	I0916 11:55:36.173315  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:36.177018  360990 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:55:36.177086  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:55:36.210878  360990 cri.go:89] found id: "e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e"
	I0916 11:55:36.210902  360990 cri.go:89] found id: ""
	I0916 11:55:36.210911  360990 logs.go:276] 1 containers: [e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e]
	I0916 11:55:36.210961  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:36.214689  360990 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:55:36.214753  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:55:36.248930  360990 cri.go:89] found id: "55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b"
	I0916 11:55:36.248954  360990 cri.go:89] found id: "4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad"
	I0916 11:55:36.248958  360990 cri.go:89] found id: ""
	I0916 11:55:36.248964  360990 logs.go:276] 2 containers: [55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b 4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad]
	I0916 11:55:36.249009  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:36.252892  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:36.256249  360990 logs.go:123] Gathering logs for dmesg ...
	I0916 11:55:36.256277  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:55:36.277616  360990 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:55:36.277655  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:55:36.372377  360990 logs.go:123] Gathering logs for coredns [15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37] ...
	I0916 11:55:36.372412  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37"
	I0916 11:55:36.409620  360990 logs.go:123] Gathering logs for kubernetes-dashboard [e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e] ...
	I0916 11:55:36.409647  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e"
	I0916 11:55:36.445781  360990 logs.go:123] Gathering logs for container status ...
	I0916 11:55:36.445807  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:55:36.484253  360990 logs.go:123] Gathering logs for etcd [044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b] ...
	I0916 11:55:36.484278  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b"
	I0916 11:55:36.525778  360990 logs.go:123] Gathering logs for kube-scheduler [1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf] ...
	I0916 11:55:36.525811  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf"
	I0916 11:55:36.560575  360990 logs.go:123] Gathering logs for kube-controller-manager [47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e] ...
	I0916 11:55:36.560604  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e"
	I0916 11:55:36.610764  360990 logs.go:123] Gathering logs for kindnet [532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606] ...
	I0916 11:55:36.610800  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606"
	I0916 11:55:36.647391  360990 logs.go:123] Gathering logs for storage-provisioner [55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b] ...
	I0916 11:55:36.647427  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b"
	I0916 11:55:36.683277  360990 logs.go:123] Gathering logs for storage-provisioner [4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad] ...
	I0916 11:55:36.683302  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad"
	I0916 11:55:36.718703  360990 logs.go:123] Gathering logs for kubelet ...
	I0916 11:55:36.718737  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:55:36.785465  360990 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:55:36.785499  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:55:36.846080  360990 logs.go:123] Gathering logs for kube-apiserver [03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c] ...
	I0916 11:55:36.846120  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c"
	I0916 11:55:36.887903  360990 logs.go:123] Gathering logs for kube-proxy [ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec] ...
	I0916 11:55:36.887939  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec"
	I0916 11:55:39.423226  360990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:55:39.435238  360990 api_server.go:72] duration metric: took 4m7.308778087s to wait for apiserver process to appear ...
	I0916 11:55:39.435271  360990 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:55:39.435311  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:55:39.435355  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:55:39.470535  360990 cri.go:89] found id: "03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c"
	I0916 11:55:39.470558  360990 cri.go:89] found id: ""
	I0916 11:55:39.470565  360990 logs.go:276] 1 containers: [03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c]
	I0916 11:55:39.470607  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.473951  360990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:55:39.474007  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:55:39.507861  360990 cri.go:89] found id: "044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b"
	I0916 11:55:39.507882  360990 cri.go:89] found id: ""
	I0916 11:55:39.507889  360990 logs.go:276] 1 containers: [044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b]
	I0916 11:55:39.507930  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.511510  360990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:55:39.511566  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:55:39.545262  360990 cri.go:89] found id: "15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37"
	I0916 11:55:39.545284  360990 cri.go:89] found id: ""
	I0916 11:55:39.545292  360990 logs.go:276] 1 containers: [15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37]
	I0916 11:55:39.545366  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.549181  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:55:39.549246  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:55:39.582114  360990 cri.go:89] found id: "1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf"
	I0916 11:55:39.582135  360990 cri.go:89] found id: ""
	I0916 11:55:39.582144  360990 logs.go:276] 1 containers: [1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf]
	I0916 11:55:39.582201  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.585543  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:55:39.585603  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:55:39.618768  360990 cri.go:89] found id: "ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec"
	I0916 11:55:39.618790  360990 cri.go:89] found id: ""
	I0916 11:55:39.618798  360990 logs.go:276] 1 containers: [ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec]
	I0916 11:55:39.618840  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.622364  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:55:39.622420  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:55:39.655857  360990 cri.go:89] found id: "47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e"
	I0916 11:55:39.655880  360990 cri.go:89] found id: ""
	I0916 11:55:39.655890  360990 logs.go:276] 1 containers: [47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e]
	I0916 11:55:39.655953  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.659315  360990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:55:39.659382  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:55:39.692439  360990 cri.go:89] found id: "532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606"
	I0916 11:55:39.692459  360990 cri.go:89] found id: ""
	I0916 11:55:39.692466  360990 logs.go:276] 1 containers: [532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606]
	I0916 11:55:39.692516  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.696024  360990 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:55:39.696085  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:55:39.729944  360990 cri.go:89] found id: "55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b"
	I0916 11:55:39.729969  360990 cri.go:89] found id: "4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad"
	I0916 11:55:39.729975  360990 cri.go:89] found id: ""
	I0916 11:55:39.729983  360990 logs.go:276] 2 containers: [55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b 4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad]
	I0916 11:55:39.730036  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.733567  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.736906  360990 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:55:39.736962  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:55:39.771856  360990 cri.go:89] found id: "e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e"
	I0916 11:55:39.771875  360990 cri.go:89] found id: ""
	I0916 11:55:39.771882  360990 logs.go:276] 1 containers: [e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e]
	I0916 11:55:39.771921  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:39.775489  360990 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:55:39.775514  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:55:39.867723  360990 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:55:39.867759  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:55:39.929128  360990 logs.go:123] Gathering logs for kubelet ...
	I0916 11:55:39.929167  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:55:39.993511  360990 logs.go:123] Gathering logs for kube-apiserver [03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c] ...
	I0916 11:55:39.993557  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c"
	I0916 11:55:40.035780  360990 logs.go:123] Gathering logs for kube-proxy [ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec] ...
	I0916 11:55:40.035810  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec"
	I0916 11:55:40.070520  360990 logs.go:123] Gathering logs for kindnet [532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606] ...
	I0916 11:55:40.070549  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606"
	I0916 11:55:40.111104  360990 logs.go:123] Gathering logs for storage-provisioner [55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b] ...
	I0916 11:55:40.111134  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b"
	I0916 11:55:40.145004  360990 logs.go:123] Gathering logs for etcd [044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b] ...
	I0916 11:55:40.145034  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b"
	I0916 11:55:40.183010  360990 logs.go:123] Gathering logs for coredns [15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37] ...
	I0916 11:55:40.183040  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37"
	I0916 11:55:40.218379  360990 logs.go:123] Gathering logs for kube-controller-manager [47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e] ...
	I0916 11:55:40.218407  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e"
	I0916 11:55:40.270835  360990 logs.go:123] Gathering logs for kubernetes-dashboard [e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e] ...
	I0916 11:55:40.270866  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e"
	I0916 11:55:40.307973  360990 logs.go:123] Gathering logs for container status ...
	I0916 11:55:40.308001  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:55:40.358104  360990 logs.go:123] Gathering logs for dmesg ...
	I0916 11:55:40.358136  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:55:40.383086  360990 logs.go:123] Gathering logs for kube-scheduler [1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf] ...
	I0916 11:55:40.383123  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf"
	I0916 11:55:40.424136  360990 logs.go:123] Gathering logs for storage-provisioner [4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad] ...
	I0916 11:55:40.424163  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad"
	I0916 11:55:42.958138  360990 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:55:42.962491  360990 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:55:42.963348  360990 api_server.go:141] control plane version: v1.31.1
	I0916 11:55:42.963372  360990 api_server.go:131] duration metric: took 3.528093491s to wait for apiserver health ...
	I0916 11:55:42.963381  360990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:55:42.963402  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:55:42.963446  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:55:42.997003  360990 cri.go:89] found id: "03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c"
	I0916 11:55:42.997031  360990 cri.go:89] found id: ""
	I0916 11:55:42.997041  360990 logs.go:276] 1 containers: [03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c]
	I0916 11:55:42.997087  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.000476  360990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:55:43.000541  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:55:43.035945  360990 cri.go:89] found id: "044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b"
	I0916 11:55:43.035969  360990 cri.go:89] found id: ""
	I0916 11:55:43.035978  360990 logs.go:276] 1 containers: [044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b]
	I0916 11:55:43.036029  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.039752  360990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:55:43.039815  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:55:43.073328  360990 cri.go:89] found id: "15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37"
	I0916 11:55:43.073377  360990 cri.go:89] found id: ""
	I0916 11:55:43.073386  360990 logs.go:276] 1 containers: [15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37]
	I0916 11:55:43.073427  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.076706  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:55:43.076762  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:55:43.109905  360990 cri.go:89] found id: "1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf"
	I0916 11:55:43.109932  360990 cri.go:89] found id: ""
	I0916 11:55:43.109943  360990 logs.go:276] 1 containers: [1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf]
	I0916 11:55:43.110001  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.113551  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:55:43.113628  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:55:43.147156  360990 cri.go:89] found id: "ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec"
	I0916 11:55:43.147176  360990 cri.go:89] found id: ""
	I0916 11:55:43.147184  360990 logs.go:276] 1 containers: [ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec]
	I0916 11:55:43.147240  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.151098  360990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:55:43.151175  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:55:43.184960  360990 cri.go:89] found id: "47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e"
	I0916 11:55:43.184987  360990 cri.go:89] found id: ""
	I0916 11:55:43.184995  360990 logs.go:276] 1 containers: [47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e]
	I0916 11:55:43.185051  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.188712  360990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:55:43.188774  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:55:43.223770  360990 cri.go:89] found id: "532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606"
	I0916 11:55:43.223794  360990 cri.go:89] found id: ""
	I0916 11:55:43.223803  360990 logs.go:276] 1 containers: [532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606]
	I0916 11:55:43.223856  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.227350  360990 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:55:43.227405  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:55:43.262493  360990 cri.go:89] found id: "55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b"
	I0916 11:55:43.262514  360990 cri.go:89] found id: "4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad"
	I0916 11:55:43.262518  360990 cri.go:89] found id: ""
	I0916 11:55:43.262525  360990 logs.go:276] 2 containers: [55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b 4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad]
	I0916 11:55:43.262612  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.266089  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.269106  360990 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:55:43.269161  360990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:55:43.308015  360990 cri.go:89] found id: "e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e"
	I0916 11:55:43.308037  360990 cri.go:89] found id: ""
	I0916 11:55:43.308046  360990 logs.go:276] 1 containers: [e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e]
	I0916 11:55:43.308103  360990 ssh_runner.go:195] Run: which crictl
	I0916 11:55:43.311640  360990 logs.go:123] Gathering logs for kube-apiserver [03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c] ...
	I0916 11:55:43.311671  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c"
	I0916 11:55:43.355156  360990 logs.go:123] Gathering logs for etcd [044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b] ...
	I0916 11:55:43.355196  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b"
	I0916 11:55:43.394109  360990 logs.go:123] Gathering logs for kube-scheduler [1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf] ...
	I0916 11:55:43.394145  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf"
	I0916 11:55:43.429940  360990 logs.go:123] Gathering logs for kube-controller-manager [47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e] ...
	I0916 11:55:43.429966  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e"
	I0916 11:55:43.480982  360990 logs.go:123] Gathering logs for storage-provisioner [55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b] ...
	I0916 11:55:43.481013  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b"
	I0916 11:55:43.515419  360990 logs.go:123] Gathering logs for storage-provisioner [4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad] ...
	I0916 11:55:43.515468  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad"
	I0916 11:55:43.550940  360990 logs.go:123] Gathering logs for dmesg ...
	I0916 11:55:43.550967  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:55:43.572632  360990 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:55:43.572666  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:55:43.664941  360990 logs.go:123] Gathering logs for kubelet ...
	I0916 11:55:43.664984  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:55:43.735966  360990 logs.go:123] Gathering logs for coredns [15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37] ...
	I0916 11:55:43.736010  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37"
	I0916 11:55:43.770990  360990 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:55:43.771026  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:55:43.830051  360990 logs.go:123] Gathering logs for container status ...
	I0916 11:55:43.830085  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:55:43.868293  360990 logs.go:123] Gathering logs for kube-proxy [ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec] ...
	I0916 11:55:43.868330  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec"
	I0916 11:55:43.902908  360990 logs.go:123] Gathering logs for kindnet [532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606] ...
	I0916 11:55:43.902932  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606"
	I0916 11:55:43.940450  360990 logs.go:123] Gathering logs for kubernetes-dashboard [e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e] ...
	I0916 11:55:43.940485  360990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e"
	I0916 11:55:46.480784  360990 system_pods.go:59] 9 kube-system pods found
	I0916 11:55:46.480822  360990 system_pods.go:61] "coredns-7c65d6cfc9-sfxnk" [ec2c3f40-5323-4dce-ae07-29c4537f3067] Running
	I0916 11:55:46.480827  360990 system_pods.go:61] "etcd-no-preload-179932" [3af42b3e-f310-4932-b24a-85d3b55e19a0] Running
	I0916 11:55:46.480832  360990 system_pods.go:61] "kindnet-2678b" [28d0afc4-03fd-4b6e-8ced-8b440d6153ff] Running
	I0916 11:55:46.480836  360990 system_pods.go:61] "kube-apiserver-no-preload-179932" [7e6f5af8-a459-4b8b-b1b8-5df32f37cfe3] Running
	I0916 11:55:46.480840  360990 system_pods.go:61] "kube-controller-manager-no-preload-179932" [313b35c1-1982-4f0a-a0f9-ffde80f7989e] Running
	I0916 11:55:46.480842  360990 system_pods.go:61] "kube-proxy-ckd46" [2c024fac-4113-4c1b-8b50-3e066e7b9b67] Running
	I0916 11:55:46.480845  360990 system_pods.go:61] "kube-scheduler-no-preload-179932" [969d30fc-6575-4f1f-bcd0-32e8132681e9] Running
	I0916 11:55:46.480851  360990 system_pods.go:61] "metrics-server-6867b74b74-xcgqq" [52862a21-d441-454e-8a52-0179b6f6c093] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 11:55:46.480855  360990 system_pods.go:61] "storage-provisioner" [040e8794-ddea-4f91-b709-cb999b3c71d5] Running
	I0916 11:55:46.480862  360990 system_pods.go:74] duration metric: took 3.517475617s to wait for pod list to return data ...
	I0916 11:55:46.480869  360990 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:55:46.483437  360990 default_sa.go:45] found service account: "default"
	I0916 11:55:46.483461  360990 default_sa.go:55] duration metric: took 2.586367ms for default service account to be created ...
	I0916 11:55:46.483470  360990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:55:46.487801  360990 system_pods.go:86] 9 kube-system pods found
	I0916 11:55:46.487830  360990 system_pods.go:89] "coredns-7c65d6cfc9-sfxnk" [ec2c3f40-5323-4dce-ae07-29c4537f3067] Running
	I0916 11:55:46.487835  360990 system_pods.go:89] "etcd-no-preload-179932" [3af42b3e-f310-4932-b24a-85d3b55e19a0] Running
	I0916 11:55:46.487839  360990 system_pods.go:89] "kindnet-2678b" [28d0afc4-03fd-4b6e-8ced-8b440d6153ff] Running
	I0916 11:55:46.487843  360990 system_pods.go:89] "kube-apiserver-no-preload-179932" [7e6f5af8-a459-4b8b-b1b8-5df32f37cfe3] Running
	I0916 11:55:46.487847  360990 system_pods.go:89] "kube-controller-manager-no-preload-179932" [313b35c1-1982-4f0a-a0f9-ffde80f7989e] Running
	I0916 11:55:46.487850  360990 system_pods.go:89] "kube-proxy-ckd46" [2c024fac-4113-4c1b-8b50-3e066e7b9b67] Running
	I0916 11:55:46.487853  360990 system_pods.go:89] "kube-scheduler-no-preload-179932" [969d30fc-6575-4f1f-bcd0-32e8132681e9] Running
	I0916 11:55:46.487859  360990 system_pods.go:89] "metrics-server-6867b74b74-xcgqq" [52862a21-d441-454e-8a52-0179b6f6c093] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 11:55:46.487865  360990 system_pods.go:89] "storage-provisioner" [040e8794-ddea-4f91-b709-cb999b3c71d5] Running
	I0916 11:55:46.487872  360990 system_pods.go:126] duration metric: took 4.397447ms to wait for k8s-apps to be running ...
	I0916 11:55:46.487879  360990 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:55:46.487932  360990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:55:46.499279  360990 system_svc.go:56] duration metric: took 11.392057ms WaitForService to wait for kubelet
	I0916 11:55:46.499304  360990 kubeadm.go:582] duration metric: took 4m14.372848286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:55:46.499322  360990 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:55:46.502339  360990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:55:46.502365  360990 node_conditions.go:123] node cpu capacity is 8
	I0916 11:55:46.502377  360990 node_conditions.go:105] duration metric: took 3.050282ms to run NodePressure ...
	I0916 11:55:46.502391  360990 start.go:241] waiting for startup goroutines ...
	I0916 11:55:46.502399  360990 start.go:246] waiting for cluster config update ...
	I0916 11:55:46.502413  360990 start.go:255] writing updated cluster config ...
	I0916 11:55:46.502702  360990 ssh_runner.go:195] Run: rm -f paused
	I0916 11:55:46.509552  360990 out.go:177] * Done! kubectl is now configured to use "no-preload-179932" cluster and "default" namespace by default
	E0916 11:55:46.511122  360990 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
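An "exec format error" at exec time usually means the binary at /usr/local/bin/kubectl is not executable on this host at all (wrong CPU architecture, or a truncated/corrupt download) rather than anything cluster-side; the cluster itself came up cleanly above. A minimal sanity check on the runner, assuming shell access:

  # Compare the binary's target architecture against the host's
  $ file /usr/local/bin/kubectl
  $ uname -m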
	
	
	==> CRI-O <==
	Sep 16 11:54:23 no-preload-179932 crio[661]: time="2024-09-16 11:54:23.738247203Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:54:38 no-preload-179932 crio[661]: time="2024-09-16 11:54:38.711229436Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1389bea6-6601-4729-939f-b5025bd30b49 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:54:38 no-preload-179932 crio[661]: time="2024-09-16 11:54:38.711486602Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1389bea6-6601-4729-939f-b5025bd30b49 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.711512522Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a81398a7-816b-458a-a701-7246254066c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.711726100Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a81398a7-816b-458a-a701-7246254066c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.712488384Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5f70396d-57e3-44f0-ac00-38f83e92c7ed name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.712706869Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5f70396d-57e3-44f0-ac00-38f83e92c7ed name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.713372604Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf/dashboard-metrics-scraper" id=e13e5f07-3b49-4856-af51-976ced80ed86 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.713506681Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.763393194Z" level=info msg="Created container 3d9df35ef45fc63d5e716762a83ebff7489385dc2e20b02258084f16b717c395: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf/dashboard-metrics-scraper" id=e13e5f07-3b49-4856-af51-976ced80ed86 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.764064934Z" level=info msg="Starting container: 3d9df35ef45fc63d5e716762a83ebff7489385dc2e20b02258084f16b717c395" id=19b82650-902a-4c00-9a67-9495d583ca4c name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:54:46 no-preload-179932 crio[661]: time="2024-09-16 11:54:46.769489055Z" level=info msg="Started container" PID=2369 containerID=3d9df35ef45fc63d5e716762a83ebff7489385dc2e20b02258084f16b717c395 description=kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf/dashboard-metrics-scraper id=19b82650-902a-4c00-9a67-9495d583ca4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=8852400f1f6755afc00f357cea2de56821581ebbfafa65f43832a8a680de55b0
	Sep 16 11:54:46 no-preload-179932 conmon[2357]: conmon 3d9df35ef45fc63d5e71 <ninfo>: container 2369 exited with status 1
	Sep 16 11:54:47 no-preload-179932 crio[661]: time="2024-09-16 11:54:47.283216379Z" level=info msg="Removing container: 7335258a7f587aae1e691dd9103f71806f7975f67876bcda4842b9fdc847138c" id=14ae1429-83ae-4b9f-969c-f2b43a077ec1 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 11:54:47 no-preload-179932 crio[661]: time="2024-09-16 11:54:47.296228550Z" level=info msg="Removed container 7335258a7f587aae1e691dd9103f71806f7975f67876bcda4842b9fdc847138c: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf/dashboard-metrics-scraper" id=14ae1429-83ae-4b9f-969c-f2b43a077ec1 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 11:54:51 no-preload-179932 crio[661]: time="2024-09-16 11:54:51.711175873Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c0b980a6-4bd2-482c-bdff-2966765b9a23 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:54:51 no-preload-179932 crio[661]: time="2024-09-16 11:54:51.711475396Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c0b980a6-4bd2-482c-bdff-2966765b9a23 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:55:03 no-preload-179932 crio[661]: time="2024-09-16 11:55:03.711425245Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e21fbdb0-0c2b-4a2c-8e5a-5cf9306b7585 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:55:03 no-preload-179932 crio[661]: time="2024-09-16 11:55:03.711719975Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e21fbdb0-0c2b-4a2c-8e5a-5cf9306b7585 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:55:18 no-preload-179932 crio[661]: time="2024-09-16 11:55:18.710616501Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=620b5c55-bcf5-4fe2-88e8-6afcdd620c91 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:55:18 no-preload-179932 crio[661]: time="2024-09-16 11:55:18.710900162Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=620b5c55-bcf5-4fe2-88e8-6afcdd620c91 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:55:32 no-preload-179932 crio[661]: time="2024-09-16 11:55:32.711151658Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=918e909b-45fc-4263-9a14-430a8769d4e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:55:32 no-preload-179932 crio[661]: time="2024-09-16 11:55:32.711384654Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=918e909b-45fc-4263-9a14-430a8769d4e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:55:47 no-preload-179932 crio[661]: time="2024-09-16 11:55:47.711687901Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=74de4b4c-bd8c-4249-8282-3c9e45bbf1c1 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:55:47 no-preload-179932 crio[661]: time="2024-09-16 11:55:47.712002616Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=74de4b4c-bd8c-4249-8282-3c9e45bbf1c1 name=/runtime.v1.ImageService/ImageStatus
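The repeating "Image fake.domain/registry.k8s.io/echoserver:1.4 not found" pairs are expected here: the metrics-server pod appears to be deliberately pointed at a nonexistent registry host (fake.domain), so every kubelet-triggered pull attempt fails and backs off. The same failure can be reproduced by hand on the node:

  # Should fail with a resolve/pull error against fake.domain, matching the log above
  $ sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4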
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	3d9df35ef45fc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           About a minute ago   Exited              dashboard-metrics-scraper   5                   8852400f1f675       dashboard-metrics-scraper-7c96f5b85b-9w6gf
	55693be81ad24       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           3 minutes ago        Running             storage-provisioner         2                   b99ad50daa477       storage-provisioner
	e9d31dabe7e29       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   4 minutes ago        Running             kubernetes-dashboard        0                   36aecac677a96       kubernetes-dashboard-695b96c756-qznkx
	15e8cb96e2cdf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           4 minutes ago        Running             coredns                     1                   7866ba24314b8       coredns-7c65d6cfc9-sfxnk
	532d1da320023       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                           4 minutes ago        Running             kindnet-cni                 1                   d123dee36f3f3       kindnet-2678b
	4a3f633c2c282       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           4 minutes ago        Exited              storage-provisioner         1                   b99ad50daa477       storage-provisioner
	ba5e847491e03       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                           4 minutes ago        Running             kube-proxy                  1                   4fd0144f6a158       kube-proxy-ckd46
	1378f63f5caaa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                           4 minutes ago        Running             kube-scheduler              1                   76b6ca21b67f0       kube-scheduler-no-preload-179932
	044c0f0593cd5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                           4 minutes ago        Running             etcd                        1                   2be53e1e2f91a       etcd-no-preload-179932
	47b0001f2e7eb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                           4 minutes ago        Running             kube-controller-manager     1                   f30e85b0cfe68       kube-controller-manager-no-preload-179932
	03f6cea9bc325       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                           4 minutes ago        Running             kube-apiserver              1                   308890226defe       kube-apiserver-no-preload-179932
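dashboard-metrics-scraper is the only unhealthy entry in this table: Exited, attempt 5, which together with the conmon "exited with status 1" line above is a classic crash loop. The container's own output is the next place to look:

  # crictl accepts a container-ID prefix
  $ sudo crictl logs 3d9df35ef45fc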
	
	
	==> coredns [15e8cb96e2cdf55c88f4e203775f1b3c8d952e6ccd66fdd41ede42a681b08d37] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44323 - 49053 "HINFO IN 5537239534745560504.1433116384820237269. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010428193s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[860733758]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:51:37.004) (total time: 30001ms):
	Trace[860733758]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:52:07.004)
	Trace[860733758]: [30.00113053s] [30.00113053s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[891116948]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:51:37.005) (total time: 30000ms):
	Trace[891116948]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:52:07.005)
	Trace[891116948]: [30.000712413s] [30.000712413s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[46219916]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:51:37.004) (total time: 30001ms):
	Trace[46219916]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:52:07.005)
	Trace[46219916]: [30.0011914s] [30.0011914s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
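All three reflector timeouts target 10.96.0.1:443, the in-cluster apiserver VIP, and all end after exactly 30s at 11:52:07: CoreDNS started before the node's service rules had been reprogrammed after the restart, and apparently recovered on its own once they were. With a working kubectl (the runner's own was the broken piece here), one can confirm the VIP is backed by the real apiserver endpoint:

  # The kubernetes Service in the default namespace should list the node's apiserver address
  $ kubectl get endpoints kubernetes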
	
	
	==> describe nodes <==
	Name:               no-preload-179932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-179932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=no-preload-179932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_50_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:50:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-179932
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:55:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:52:06 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:52:06 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:52:06 +0000   Mon, 16 Sep 2024 11:50:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:52:06 +0000   Mon, 16 Sep 2024 11:51:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-179932
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 349c6647bc21485091955047cd63b370
	  System UUID:                93f9cbba-c2f8-4376-ab54-e687ad96b58b
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sfxnk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m5s
	  kube-system                 etcd-no-preload-179932                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m10s
	  kube-system                 kindnet-2678b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m5s
	  kube-system                 kube-apiserver-no-preload-179932              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-no-preload-179932     200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-ckd46                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-no-preload-179932              100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 metrics-server-6867b74b74-xcgqq               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m41s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kubernetes-dashboard        dashboard-metrics-scraper-7c96f5b85b-9w6gf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-qznkx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m3s                   kube-proxy       
	  Normal   Starting                 4m21s                  kube-proxy       
	  Normal   Starting                 5m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m11s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     5m10s                  kubelet          Node no-preload-179932 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m10s                  kubelet          Node no-preload-179932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  5m10s                  kubelet          Node no-preload-179932 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           5m6s                   node-controller  Node no-preload-179932 event: Registered Node no-preload-179932 in Controller
	  Normal   NodeReady                4m48s                  kubelet          Node no-preload-179932 status is now: NodeReady
	  Normal   Starting                 4m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m27s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m27s (x9 over 4m27s)  kubelet          Node no-preload-179932 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m27s (x7 over 4m27s)  kubelet          Node no-preload-179932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node no-preload-179932 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m19s                  node-controller  Node no-preload-179932 event: Registered Node no-preload-179932 in Controller
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +1.027886] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000007] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +2.015855] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000006] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +4.223671] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000005] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000002] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000002] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +8.191398] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000006] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
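The "martian source" flood is the kernel logging packets whose source address (10.96.0.1, the service VIP) is not routable on the receiving interface (br-3318c5c795cb, the Docker bridge backing this node). That is consistent with the CoreDNS timeouts above: traffic for the VIP leaked onto the bridge while the node's iptables rules were still being rebuilt. Whether such packets get logged at all is a sysctl:

  # Non-zero means martian packets are logged to dmesg
  $ sysctl net.ipv4.conf.all.log_martians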
	
	
	==> etcd [044c0f0593cd5e24d2026f9d5543067b2db87d41f9bdf46d4a09f377f41e975b] <==
	{"level":"info","ts":"2024-09-16T11:51:32.812625Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:51:32.812947Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:51:32.813024Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:51:32.809756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2024-09-16T11:51:32.813259Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2024-09-16T11:51:32.813407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:51:32.813465Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:51:32.815713Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:51:32.816332Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:51:33.910717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T11:51:33.910868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:51:33.910939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:51:33.910959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:51:33.910968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2024-09-16T11:51:33.910980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:51:33.910991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2024-09-16T11:51:33.912692Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:51:33.912707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:51:33.912700Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-179932 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:51:33.912989Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:51:33.913019Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:51:33.913957Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:51:33.914117Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:51:33.914806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:51:33.915036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 11:55:59 up  1:38,  0 users,  load average: 1.31, 0.92, 0.85
	Linux no-preload-179932 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [532d1da320023f7150dcd25c90425856da17a281856ae7b0839109aaad981606] <==
	I0916 11:53:57.494199       1 main.go:299] handling current node
	I0916 11:54:07.502948       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:54:07.502988       1 main.go:299] handling current node
	I0916 11:54:17.497447       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:54:17.497492       1 main.go:299] handling current node
	I0916 11:54:27.501454       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:54:27.501493       1 main.go:299] handling current node
	I0916 11:54:37.494227       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:54:37.494275       1 main.go:299] handling current node
	I0916 11:54:47.494860       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:54:47.494916       1 main.go:299] handling current node
	I0916 11:54:57.497500       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:54:57.497531       1 main.go:299] handling current node
	I0916 11:55:07.502776       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:55:07.502813       1 main.go:299] handling current node
	I0916 11:55:17.500768       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:55:17.500809       1 main.go:299] handling current node
	I0916 11:55:27.502661       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:55:27.502700       1 main.go:299] handling current node
	I0916 11:55:37.494915       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:55:37.494956       1 main.go:299] handling current node
	I0916 11:55:47.499221       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:55:47.499259       1 main.go:299] handling current node
	I0916 11:55:57.501427       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:55:57.501464       1 main.go:299] handling current node
	
	
	==> kube-apiserver [03f6cea9bc32550b5dcd73cb6dbeff59ee6318846b6cd86be24097266435e69c] <==
	I0916 11:51:37.611522       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:51:37.619253       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:51:37.651641       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.250.5"}
	I0916 11:51:37.665455       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.94.138"}
	I0916 11:51:40.060466       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:51:40.209483       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:51:40.258458       1 controller.go:615] quota admission added evaluator for: endpoints
	W0916 11:52:36.418122       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:52:36.418156       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:52:36.418187       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:52:36.418202       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:52:36.419333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:52:36.419376       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 11:54:36.420236       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:54:36.420295       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0916 11:54:36.420236       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:54:36.420375       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:54:36.421391       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:54:36.421407       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
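The recurring 503 for v1beta1.metrics.k8s.io is the aggregation layer failing to reach its backend: the APIService is registered, but the metrics-server pod behind it never became ready (it is stuck in ImagePullBackOff, per the kubelet and CRI-O logs), so the OpenAPI spec download fails and gets rate-limit-requeued indefinitely. The registration status makes this visible directly:

  # Expect Available=False with a MissingEndpoints/FailedDiscoveryCheck-style reason
  $ kubectl get apiservice v1beta1.metrics.k8s.io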
	
	
	==> kube-controller-manager [47b0001f2e7ebce78ed1137014ae85724292e26927693d2d69b0a9b731ef2a0e] <==
	I0916 11:52:33.034423       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="69.244µs"
	I0916 11:52:37.312465       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="64.411µs"
	E0916 11:52:39.873055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:52:40.288789       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:52:43.720844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="80.689µs"
	I0916 11:53:09.720821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="62.883µs"
	E0916 11:53:09.878790       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:53:10.296684       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:53:19.127127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="61.602µs"
	I0916 11:53:22.720201       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="82.792µs"
	I0916 11:53:27.312770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="75.399µs"
	E0916 11:53:39.884589       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:53:40.303508       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:54:09.889939       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:54:10.310832       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:54:38.721078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="93.234µs"
	E0916 11:54:39.896176       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:54:40.317371       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:54:47.294658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="73.77µs"
	I0916 11:54:48.296170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="66.346µs"
	I0916 11:54:51.721709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="171.9µs"
	E0916 11:55:09.901061       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:55:10.325020       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:55:39.906814       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:55:40.332075       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
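The resource-quota and garbage-collector errors are downstream of the same unavailable metrics APIService: both controllers do periodic discovery, find metrics.k8s.io/v1beta1 advertised but unservable, and log once per ~30s resync. A raw discovery call shows the same 503 the controllers see:

  # Returns "service unavailable" while metrics-server is down
  $ kubectl get --raw /apis/metrics.k8s.io/v1beta1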
	
	
	==> kube-proxy [ba5e847491e03ee49b142224175f41f296eae77f3c1bc8365af4ba0b622269ec] <==
	I0916 11:51:36.920784       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:51:37.299103       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 11:51:37.299186       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:51:37.403683       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:51:37.403747       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:51:37.406484       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:51:37.406925       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:51:37.406959       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:51:37.408593       1 config.go:199] "Starting service config controller"
	I0916 11:51:37.408633       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:51:37.408677       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:51:37.408686       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:51:37.409783       1 config.go:328] "Starting node config controller"
	I0916 11:51:37.409802       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:51:37.508771       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:51:37.508842       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:51:37.510329       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1378f63f5caaaf168a4050488a600a376efadb44005d8f209a52815c906a05cf] <==
	I0916 11:51:33.628945       1 serving.go:386] Generated self-signed cert in-memory
	W0916 11:51:35.300947       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:51:35.301096       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:51:35.301141       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:51:35.301178       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:51:35.497150       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:51:35.497287       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:51:35.500766       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:51:35.500896       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:51:35.500925       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:51:35.500949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:51:35.601560       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:54:59 no-preload-179932 kubelet[795]: E0916 11:54:59.711307     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9w6gf_kubernetes-dashboard(da89bca3-cdf2-47c5-978b-14df8c4fd96a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf" podUID="da89bca3-cdf2-47c5-978b-14df8c4fd96a"
	Sep 16 11:55:01 no-preload-179932 kubelet[795]: E0916 11:55:01.744790     795 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487701744583183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:01 no-preload-179932 kubelet[795]: E0916 11:55:01.744830     795 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487701744583183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:03 no-preload-179932 kubelet[795]: E0916 11:55:03.711947     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xcgqq" podUID="52862a21-d441-454e-8a52-0179b6f6c093"
	Sep 16 11:55:11 no-preload-179932 kubelet[795]: E0916 11:55:11.746025     795 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487711745823227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:11 no-preload-179932 kubelet[795]: E0916 11:55:11.746065     795 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487711745823227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:12 no-preload-179932 kubelet[795]: I0916 11:55:12.711061     795 scope.go:117] "RemoveContainer" containerID="3d9df35ef45fc63d5e716762a83ebff7489385dc2e20b02258084f16b717c395"
	Sep 16 11:55:12 no-preload-179932 kubelet[795]: E0916 11:55:12.711281     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9w6gf_kubernetes-dashboard(da89bca3-cdf2-47c5-978b-14df8c4fd96a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf" podUID="da89bca3-cdf2-47c5-978b-14df8c4fd96a"
	Sep 16 11:55:18 no-preload-179932 kubelet[795]: E0916 11:55:18.711247     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xcgqq" podUID="52862a21-d441-454e-8a52-0179b6f6c093"
	Sep 16 11:55:21 no-preload-179932 kubelet[795]: E0916 11:55:21.747208     795 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487721746992874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:21 no-preload-179932 kubelet[795]: E0916 11:55:21.747249     795 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487721746992874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:27 no-preload-179932 kubelet[795]: I0916 11:55:27.710485     795 scope.go:117] "RemoveContainer" containerID="3d9df35ef45fc63d5e716762a83ebff7489385dc2e20b02258084f16b717c395"
	Sep 16 11:55:27 no-preload-179932 kubelet[795]: E0916 11:55:27.710735     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9w6gf_kubernetes-dashboard(da89bca3-cdf2-47c5-978b-14df8c4fd96a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf" podUID="da89bca3-cdf2-47c5-978b-14df8c4fd96a"
	Sep 16 11:55:31 no-preload-179932 kubelet[795]: E0916 11:55:31.748903     795 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487731748717525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:31 no-preload-179932 kubelet[795]: E0916 11:55:31.748945     795 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487731748717525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:32 no-preload-179932 kubelet[795]: E0916 11:55:32.711687     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xcgqq" podUID="52862a21-d441-454e-8a52-0179b6f6c093"
	Sep 16 11:55:41 no-preload-179932 kubelet[795]: I0916 11:55:41.711203     795 scope.go:117] "RemoveContainer" containerID="3d9df35ef45fc63d5e716762a83ebff7489385dc2e20b02258084f16b717c395"
	Sep 16 11:55:41 no-preload-179932 kubelet[795]: E0916 11:55:41.711444     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9w6gf_kubernetes-dashboard(da89bca3-cdf2-47c5-978b-14df8c4fd96a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf" podUID="da89bca3-cdf2-47c5-978b-14df8c4fd96a"
	Sep 16 11:55:41 no-preload-179932 kubelet[795]: E0916 11:55:41.751065     795 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487741750428197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:41 no-preload-179932 kubelet[795]: E0916 11:55:41.751108     795 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487741750428197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:47 no-preload-179932 kubelet[795]: E0916 11:55:47.712273     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xcgqq" podUID="52862a21-d441-454e-8a52-0179b6f6c093"
	Sep 16 11:55:51 no-preload-179932 kubelet[795]: E0916 11:55:51.753199     795 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487751752973781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:51 no-preload-179932 kubelet[795]: E0916 11:55:51.753238     795 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487751752973781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154189,},InodesUsed:&UInt64Value{Value:59,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:55:56 no-preload-179932 kubelet[795]: I0916 11:55:56.711027     795 scope.go:117] "RemoveContainer" containerID="3d9df35ef45fc63d5e716762a83ebff7489385dc2e20b02258084f16b717c395"
	Sep 16 11:55:56 no-preload-179932 kubelet[795]: E0916 11:55:56.711213     795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9w6gf_kubernetes-dashboard(da89bca3-cdf2-47c5-978b-14df8c4fd96a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9w6gf" podUID="da89bca3-cdf2-47c5-978b-14df8c4fd96a"
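Two independent error loops are interleaved here. The CrashLoopBackOff and ImagePullBackOff entries restate the dashboard-metrics-scraper and metrics-server problems already covered. The "missing image stats" eviction-manager errors are different: CRI-O's ImageFsInfo response carries an image filesystem but an empty ContainerFilesystems list (visible verbatim in the error), which this kubelet (v1.31.1, with split image/container filesystem support) refuses to work with, so eviction decisions are skipped every 10s. The raw CRI answer can be inspected directly:

  # Shows the ImageFsInfoResponse the kubelet is complaining about
  $ sudo crictl imagefsinfo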
	
	
	==> kubernetes-dashboard [e9d31dabe7e29e749c99bf50400d9cc674eac6a0c5a3b4e30aa7c8484f67e39e] <==
	2024/09/16 11:51:49 Using namespace: kubernetes-dashboard
	2024/09/16 11:51:49 Using in-cluster config to connect to apiserver
	2024/09/16 11:51:49 Using secret token for csrf signing
	2024/09/16 11:51:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:51:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:51:49 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 11:51:49 Generating JWE encryption key
	2024/09/16 11:51:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:51:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:51:49 Initializing JWE encryption key from synchronized object
	2024/09/16 11:51:49 Creating in-cluster Sidecar client
	2024/09/16 11:51:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:51:49 Serving insecurely on HTTP port: 9090
	2024/09/16 11:52:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:52:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:53:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:53:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:54:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:54:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:55:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:55:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:51:49 Starting overwatch
	
	
	==> storage-provisioner [4a3f633c2c282d0d3ef1888e5772b186e7e146214dc40ef47273d47a1c9be1ad] <==
	I0916 11:51:36.802939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 11:52:06.806875       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [55693be81ad2406fae9a7635a02fa15b37dcdfb049d7354c866aff8b50903c4b] <==
	I0916 11:52:07.040244       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:52:07.047849       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:52:07.047900       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:52:24.444701       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:52:24.444827       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6492543-a96c-4e35-8fc0-19e6c7bc9c6d", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-179932_ce7ac385-6e3f-462d-8d3a-b3fe40200d83 became leader
	I0916 11:52:24.444881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-179932_ce7ac385-6e3f-462d-8d3a-b3fe40200d83!
	I0916 11:52:24.545159       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-179932_ce7ac385-6e3f-462d-8d3a-b3fe40200d83!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-179932 -n no-preload-179932
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (531.745µs)
helpers_test.go:263: kubectl --context no-preload-179932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.91s)
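
The repeated "fork/exec /usr/local/bin/kubectl: exec format error" above comes from the kernel, before kubectl runs at all: ENOEXEC ("Exec format error") means /usr/local/bin/kubectl is not a loadable executable for this host, which almost always points to a kubectl binary built for the wrong architecture (or a truncated download) rather than any cluster problem. A minimal shell sketch for checking the binary on the agent, assuming shell access to the host; the expected values in the comments describe a healthy amd64 install and are not taken from this run:

	# Report the file format; a working install on this amd64 agent should show
	# something like "ELF 64-bit LSB executable, x86-64".
	file /usr/local/bin/kubectl
	# Host architecture for comparison (expected: x86_64).
	uname -m
	# The ELF header names the target machine directly.
	readelf -h /usr/local/bin/kubectl | grep -i machine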

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (3.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-451928 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-451928 create -f testdata/busybox.yaml: fork/exec /usr/local/bin/kubectl: exec format error (743.187µs)
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-451928 create -f testdata/busybox.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-451928
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-451928:

-- stdout --
	[
	    {
	        "Id": "5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae",
	        "Created": "2024-09-16T11:56:10.793026862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 370642,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:56:10.911717057Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/hosts",
	        "LogPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae-json.log",
	        "Name": "/default-k8s-diff-port-451928",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-451928:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-451928",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-451928",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-451928/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-451928",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-451928",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-451928",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c295616087e44bffb82a8e4e82399f08c9ad2a364df3b7343d36ba13396023a6",
	            "SandboxKey": "/var/run/docker/netns/c295616087e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-451928": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "22c51b08b0ca2daf580627f39cd71ae241a476b62a744a7a3bfd63c1aaadfdfe",
	                    "EndpointID": "576e0db3957872bf299445aa83a23070656403cdbf34945b607d06891920fd68",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-451928",
	                        "5e4edb1ce4fb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-451928 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-451928 logs -n 25: (1.147275699s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467        | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-406673 image                           | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-946599 | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | disable-driver-mounts-946599                           |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-179932             | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-179932                  | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-179932 image list                           | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:55 UTC | 16 Sep 24 11:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:56:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:56:05.303544  369925 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:56:05.303695  369925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:56:05.303707  369925 out.go:358] Setting ErrFile to fd 2...
	I0916 11:56:05.303713  369925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:56:05.304017  369925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:56:05.304835  369925 out.go:352] Setting JSON to false
	I0916 11:56:05.306135  369925 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5905,"bootTime":1726481860,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:56:05.306265  369925 start.go:139] virtualization: kvm guest
	I0916 11:56:05.308684  369925 out.go:177] * [default-k8s-diff-port-451928] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:56:05.310432  369925 notify.go:220] Checking for updates...
	I0916 11:56:05.310468  369925 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:56:05.311947  369925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:56:05.313397  369925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:56:05.315161  369925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:56:05.316694  369925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:56:05.318120  369925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:56:05.319958  369925 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320054  369925 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320136  369925 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320218  369925 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:56:05.343305  369925 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:56:05.343431  369925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:56:05.398162  369925 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:56:05.386767708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:56:05.398269  369925 docker.go:318] overlay module found
	I0916 11:56:05.401236  369925 out.go:177] * Using the docker driver based on user configuration
	I0916 11:56:05.402778  369925 start.go:297] selected driver: docker
	I0916 11:56:05.402792  369925 start.go:901] validating driver "docker" against <nil>
	I0916 11:56:05.402803  369925 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:56:05.403619  369925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:56:05.458917  369925 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:56:05.449556012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:56:05.459101  369925 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:56:05.459345  369925 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:56:05.460940  369925 out.go:177] * Using Docker driver with root privileges
	I0916 11:56:05.462262  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:05.462314  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:05.462326  369925 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:56:05.462389  369925 start.go:340] cluster config:
	{Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:56:05.463951  369925 out.go:177] * Starting "default-k8s-diff-port-451928" primary control-plane node in "default-k8s-diff-port-451928" cluster
	I0916 11:56:05.465195  369925 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:56:05.466528  369925 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:56:05.467567  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:05.467607  369925 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:56:05.467620  369925 cache.go:56] Caching tarball of preloaded images
	I0916 11:56:05.467678  369925 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:56:05.467704  369925 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:56:05.467737  369925 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:56:05.467838  369925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json ...
	I0916 11:56:05.467865  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json: {Name:mk3f0192a4b7f3d3763c1a6bd15f21266a5e389c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:56:05.488729  369925 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:56:05.488751  369925 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:56:05.488835  369925 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:56:05.488858  369925 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:56:05.488863  369925 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:56:05.488873  369925 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:56:05.488884  369925 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:56:05.554343  369925 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:56:05.554398  369925 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:56:05.554444  369925 start.go:360] acquireMachinesLock for default-k8s-diff-port-451928: {Name:mkd4d5ce5590d094d470576746b410c1fbb05d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:56:05.554565  369925 start.go:364] duration metric: took 95.582µs to acquireMachinesLock for "default-k8s-diff-port-451928"
	I0916 11:56:05.554594  369925 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:56:05.554695  369925 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:56:05.556420  369925 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:56:05.556691  369925 start.go:159] libmachine.API.Create for "default-k8s-diff-port-451928" (driver="docker")
	I0916 11:56:05.556715  369925 client.go:168] LocalClient.Create starting
	I0916 11:56:05.556786  369925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:56:05.556820  369925 main.go:141] libmachine: Decoding PEM data...
	I0916 11:56:05.556841  369925 main.go:141] libmachine: Parsing certificate...
	I0916 11:56:05.556906  369925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:56:05.556940  369925 main.go:141] libmachine: Decoding PEM data...
	I0916 11:56:05.556954  369925 main.go:141] libmachine: Parsing certificate...
	I0916 11:56:05.557289  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:56:05.576970  369925 cli_runner.go:211] docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:56:05.577029  369925 network_create.go:284] running [docker network inspect default-k8s-diff-port-451928] to gather additional debugging logs...
	I0916 11:56:05.577045  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928
	W0916 11:56:05.594489  369925 cli_runner.go:211] docker network inspect default-k8s-diff-port-451928 returned with exit code 1
	I0916 11:56:05.594540  369925 network_create.go:287] error running [docker network inspect default-k8s-diff-port-451928]: docker network inspect default-k8s-diff-port-451928: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-451928 not found
	I0916 11:56:05.594557  369925 network_create.go:289] output of [docker network inspect default-k8s-diff-port-451928]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-451928 not found
	
	** /stderr **
	I0916 11:56:05.594675  369925 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:56:05.613180  369925 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:56:05.614533  369925 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:56:05.615818  369925 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:56:05.616767  369925 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:56:05.617852  369925 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:56:05.618825  369925 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:56:05.620128  369925 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023833e0}
	I0916 11:56:05.620159  369925 network_create.go:124] attempt to create docker network default-k8s-diff-port-451928 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:56:05.620217  369925 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 default-k8s-diff-port-451928
	I0916 11:56:05.688360  369925 network_create.go:108] docker network default-k8s-diff-port-451928 192.168.103.0/24 created
	I0916 11:56:05.688413  369925 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-451928" container
	I0916 11:56:05.688485  369925 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:56:05.707399  369925 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-451928 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:56:05.726926  369925 oci.go:103] Successfully created a docker volume default-k8s-diff-port-451928
	I0916 11:56:05.727024  369925 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-451928-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --entrypoint /usr/bin/test -v default-k8s-diff-port-451928:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:56:06.241480  369925 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-451928
	I0916 11:56:06.241517  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:06.241541  369925 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:56:06.241592  369925 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-451928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:56:10.727699  369925 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-451928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.486050673s)
	I0916 11:56:10.727728  369925 kic.go:203] duration metric: took 4.486185106s to extract preloaded images to volume ...
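The extraction step runs a throwaway container whose entrypoint is tar, bind-mounting the tarball read-only at /preloaded.tar and the named volume at /extractDir. A sketch of assembling and timing that command, assuming the docker CLI is on PATH (the tarball path and names are placeholders, not the real ones):

    // extract_preload.go - sketch of untarring a preload into a docker volume.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        tarball := "/path/to/preloaded-images.tar.lz4" // placeholder path
        volume := "default-k8s-diff-port-451928"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644"

        start := time.Now()
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("duration metric: took %s to extract preloaded images to volume\n", time.Since(start))
    }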
	W0916 11:56:10.727849  369925 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:56:10.727935  369925 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:56:10.777179  369925 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-451928 --name default-k8s-diff-port-451928 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --network default-k8s-diff-port-451928 --ip 192.168.103.2 --volume default-k8s-diff-port-451928:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:56:11.076240  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Running}}
	I0916 11:56:11.096679  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.115724  369925 cli_runner.go:164] Run: docker exec default-k8s-diff-port-451928 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:56:11.159179  369925 oci.go:144] the created container "default-k8s-diff-port-451928" has a running status.
	I0916 11:56:11.159223  369925 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa...
	I0916 11:56:11.413161  369925 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:56:11.438230  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.465506  369925 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:56:11.465530  369925 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-451928 chown docker:docker /home/docker/.ssh/authorized_keys]
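kic.go:225 generates the SSH keypair whose public half is the 381-byte authorized_keys file copied into the container for the docker user. A sketch of that step with Go's standard crypto packages plus golang.org/x/crypto/ssh (output file names are illustrative):

    // kic_sshkey.go - sketch of generating the kic machine's SSH identity.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private half -> id_rsa, mode 0600 like any SSH key.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        // Public half -> the one-line "ssh-rsa AAAA..." authorized_keys format.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }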
	I0916 11:56:11.515063  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.537179  369925 machine.go:93] provisionDockerMachine start ...
	I0916 11:56:11.537281  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.556335  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.556616  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.556640  369925 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:56:11.768876  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-451928
	
	I0916 11:56:11.768902  369925 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-451928"
	I0916 11:56:11.768966  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.788986  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.789249  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.789266  369925 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-451928 && echo "default-k8s-diff-port-451928" | sudo tee /etc/hostname
	I0916 11:56:11.936460  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-451928
	
	I0916 11:56:11.936559  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.954102  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.954288  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.954311  369925 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-451928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-451928/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-451928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:56:12.089644  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
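Each "About to run SSH command" entry is one command executed over the loopback port that docker published for the container's 22/tcp (33108 here). A minimal sketch of such a runner using golang.org/x/crypto/ssh, assuming key-based auth as the docker user; host-key verification is skipped because the target is a local port forward, not a remote host:

    // ssh_provision.go - sketch of running one provisioning command over SSH.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runSSH(addr, keyPath, command string) (string, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback tunnel; no host key pinning
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(command) // one command per session
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:33108", "id_rsa", "hostname")
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }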
	I0916 11:56:12.089677  369925 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:56:12.089715  369925 ubuntu.go:177] setting up certificates
	I0916 11:56:12.089731  369925 provision.go:84] configureAuth start
	I0916 11:56:12.089783  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:12.106669  369925 provision.go:143] copyHostCerts
	I0916 11:56:12.106734  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:56:12.106742  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:56:12.106811  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:56:12.106897  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:56:12.106906  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:56:12.106929  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:56:12.106983  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:56:12.106989  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:56:12.107010  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:56:12.107105  369925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-451928 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-451928 localhost minikube]
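configureAuth issues a server certificate whose subject alternative names match the san=[...] list in the log, signed by the CA from certs/ca.pem. A sketch of issuing such a cert with crypto/x509, assuming the CA certificate and key are already parsed; signServerCert is a hypothetical helper, not minikube's API:

    // server_cert.go - sketch of signing a server cert with IP and DNS SANs.
    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a DER-encoded server cert signed by caCert/caKey.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-451928"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as logged: san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-451928 localhost minikube]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            DNSNames:    []string{"default-k8s-diff-port-451928", "localhost", "minikube"},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    }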
	I0916 11:56:12.356779  369925 provision.go:177] copyRemoteCerts
	I0916 11:56:12.356846  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:56:12.356882  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.373979  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:12.474469  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:56:12.498244  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 11:56:12.520551  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:56:12.543988  369925 provision.go:87] duration metric: took 454.24102ms to configureAuth
	I0916 11:56:12.544015  369925 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:56:12.544171  369925 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:12.544262  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.562970  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:12.563218  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:12.563243  369925 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:56:12.788862  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:56:12.788899  369925 machine.go:96] duration metric: took 1.251694448s to provisionDockerMachine
	I0916 11:56:12.788917  369925 client.go:171] duration metric: took 7.23219201s to LocalClient.Create
	I0916 11:56:12.788941  369925 start.go:167] duration metric: took 7.232248271s to libmachine.API.Create "default-k8s-diff-port-451928"
	I0916 11:56:12.788953  369925 start.go:293] postStartSetup for "default-k8s-diff-port-451928" (driver="docker")
	I0916 11:56:12.788969  369925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:56:12.789043  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:56:12.789093  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.808336  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:12.906746  369925 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:56:12.909982  369925 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:56:12.910020  369925 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:56:12.910032  369925 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:56:12.910040  369925 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:56:12.910054  369925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:56:12.910120  369925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:56:12.910210  369925 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:56:12.910334  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:56:12.919277  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:56:12.943861  369925 start.go:296] duration metric: took 154.890441ms for postStartSetup
	I0916 11:56:12.944234  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:12.962378  369925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json ...
	I0916 11:56:12.962654  369925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:56:12.962705  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.979711  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.070255  369925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:56:13.074523  369925 start.go:128] duration metric: took 7.519810319s to createHost
	I0916 11:56:13.074564  369925 start.go:83] releasing machines lock for "default-k8s-diff-port-451928", held for 7.519971551s
	I0916 11:56:13.074634  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:13.092188  369925 ssh_runner.go:195] Run: cat /version.json
	I0916 11:56:13.092231  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:13.092286  369925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:56:13.092341  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:13.111088  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.111589  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.201269  369925 ssh_runner.go:195] Run: systemctl --version
	I0916 11:56:13.281779  369925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:56:13.422613  369925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:56:13.427455  369925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:56:13.446790  369925 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:56:13.446866  369925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:56:13.476675  369925 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
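Rather than deleting conflicting CNI configs, the find/mv commands above rename them with an .mk_disabled suffix so CRI-O ignores them and they can be restored later. The same rename pass sketched in Go, with glob patterns mirroring the log's:

    // disable_cni.go - sketch of side-lining loopback/bridge/podman CNI configs.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        patterns := []string{
            "/etc/cni/net.d/*loopback.conf*",
            "/etc/cni/net.d/*bridge*",
            "/etc/cni/net.d/*podman*",
        }
        for _, pattern := range patterns {
            matches, _ := filepath.Glob(pattern)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on an earlier run
                }
                if err := os.Rename(m, m+".mk_disabled"); err == nil {
                    fmt.Printf("disabled %s\n", m)
                }
            }
        }
    }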
	I0916 11:56:13.476703  369925 start.go:495] detecting cgroup driver to use...
	I0916 11:56:13.476733  369925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:56:13.476781  369925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:56:13.491098  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:56:13.501847  369925 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:56:13.501904  369925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:56:13.514875  369925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:56:13.528583  369925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:56:13.608336  369925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:56:13.692657  369925 docker.go:233] disabling docker service ...
	I0916 11:56:13.692728  369925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:56:13.711012  369925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:56:13.722637  369925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:56:13.804004  369925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:56:13.893600  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:56:13.904152  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:56:13.919897  369925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:56:13.919949  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.929206  369925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:56:13.929266  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.938651  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.947671  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.956988  369925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:56:13.965991  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.975358  369925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.990951  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:14.000471  369925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:56:14.008353  369925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
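The sed calls above each rewrite a single key in /etc/crio/crio.conf.d/02-crio.conf in place. Two of those edits expressed with Go's regexp package; the (?m) flag makes ^ and $ anchor per line, matching sed's line-at-a-time model:

    // crio_conf.go - sketch of the pause_image / cgroup_manager rewrites.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }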
	I0916 11:56:14.016281  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:14.094650  369925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:56:14.206633  369925 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:56:14.206706  369925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:56:14.210270  369925 start.go:563] Will wait 60s for crictl version
	I0916 11:56:14.210326  369925 ssh_runner.go:195] Run: which crictl
	I0916 11:56:14.214640  369925 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:56:14.248830  369925 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:56:14.248918  369925 ssh_runner.go:195] Run: crio --version
	I0916 11:56:14.286549  369925 ssh_runner.go:195] Run: crio --version
	I0916 11:56:14.323513  369925 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:56:14.324805  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:56:14.342953  369925 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:56:14.346765  369925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:56:14.357487  369925 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:56:14.357602  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:14.357649  369925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:56:14.419150  369925 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:56:14.419171  369925 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:56:14.419215  369925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:56:14.452381  369925 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:56:14.452404  369925 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:56:14.452411  369925 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.31.1 crio true true} ...
	I0916 11:56:14.452494  369925 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-451928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:56:14.452552  369925 ssh_runner.go:195] Run: crio config
	I0916 11:56:14.492446  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:14.492470  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:14.492478  369925 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:56:14.492498  369925 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-451928 NodeName:default-k8s-diff-port-451928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:56:14.492627  369925 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-451928"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
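minikube renders the kubeadm.yaml above by filling a template with the values from the kubeadm options struct. A sketch of that general text/template pattern; the fragment is heavily abbreviated and illustrative, not the real minikube template:

    // kubeadm_tmpl.go - sketch of rendering a kubeadm config from a template.
    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values matching this run: node IP 192.168.103.2, API server port 8444.
        t.Execute(os.Stdout, struct {
            AdvertiseAddress string
            APIServerPort    int
        }{"192.168.103.2", 8444})
    }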
	I0916 11:56:14.492684  369925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:56:14.500882  369925 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:56:14.500998  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:56:14.509117  369925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0916 11:56:14.527099  369925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:56:14.543245  369925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0916 11:56:14.559289  369925 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:56:14.562462  369925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
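The bash one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal line and appending the fresh mapping. The same idea sketched in Go, writing the file directly rather than via the /tmp/h.$$ staging copy (must run as root):

    // hosts_entry.go - sketch of refreshing the control-plane /etc/hosts entry.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.103.2\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Mirror `grep -v $'\tcontrol-plane.minikube.internal$'`: drop stale entries.
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }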
	I0916 11:56:14.572764  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:14.652656  369925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:56:14.665329  369925 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928 for IP: 192.168.103.2
	I0916 11:56:14.665377  369925 certs.go:194] generating shared ca certs ...
	I0916 11:56:14.665401  369925 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.665550  369925 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:56:14.665587  369925 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:56:14.665596  369925 certs.go:256] generating profile certs ...
	I0916 11:56:14.665646  369925 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key
	I0916 11:56:14.665673  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt with IP's: []
	I0916 11:56:14.924148  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt ...
	I0916 11:56:14.924176  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: {Name:mk091e36192745584a10a0223d5da9c4774ead9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.924373  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key ...
	I0916 11:56:14.924390  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key: {Name:mkbdb702fd43b4403c626971aece787eeadc3f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.924500  369925 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28
	I0916 11:56:14.924525  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:56:15.219046  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 ...
	I0916 11:56:15.219072  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28: {Name:mkfe3b390ec90859e5a46e10bdce87c5dc6eb650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.219272  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28 ...
	I0916 11:56:15.219293  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28: {Name:mke646592535caf60542fd88ece7f067c10338a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.219400  369925 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt
	I0916 11:56:15.219505  369925 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key
	I0916 11:56:15.219595  369925 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key
	I0916 11:56:15.219625  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt with IP's: []
	I0916 11:56:15.383658  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt ...
	I0916 11:56:15.383690  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt: {Name:mkd552c3f0141c13b380fd54080a38ef06226dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.383896  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key ...
	I0916 11:56:15.383917  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key: {Name:mk90d5e6b30f7e493c69d8c0bc52df0016cace50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.384122  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:56:15.384172  369925 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:56:15.384188  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:56:15.384223  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:56:15.384256  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:56:15.384287  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:56:15.384343  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:56:15.385028  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:56:15.408492  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:56:15.430724  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:56:15.453586  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:56:15.475502  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 11:56:15.498275  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:56:15.520879  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:56:15.545503  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:56:15.569096  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:56:15.591066  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:56:15.613068  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:56:15.635329  369925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:56:15.652515  369925 ssh_runner.go:195] Run: openssl version
	I0916 11:56:15.657749  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:56:15.666602  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.669904  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.669960  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.676515  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:56:15.685194  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:56:15.693968  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.697218  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.697277  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.703984  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:56:15.712601  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:56:15.721031  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.724791  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.724850  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.731103  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
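The 51391683.0 / 3ec20f2e.0 / b5213941.0 link names are OpenSSL subject-hash filenames: the trust directory is keyed by `openssl x509 -hash` so TLS libraries can locate a CA by its subject. A sketch of creating such a link, assuming the openssl binary is available; linkBySubjectHash is a hypothetical helper:

    // cert_hash_link.go - sketch of the "ln -fs cert /etc/ssl/certs/<hash>.0" step.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func linkBySubjectHash(pemPath string) error {
        // `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // emulate ln -f: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }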
	I0916 11:56:15.739950  369925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:56:15.742974  369925 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:56:15.743021  369925 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:56:15.743080  369925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:56:15.743140  369925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:56:15.777866  369925 cri.go:89] found id: ""
	I0916 11:56:15.777934  369925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:56:15.786860  369925 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:56:15.795173  369925 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:56:15.795238  369925 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:56:15.803379  369925 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:56:15.803402  369925 kubeadm.go:157] found existing configuration files:
	
	I0916 11:56:15.803504  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0916 11:56:15.811862  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:56:15.811917  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:56:15.820159  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0916 11:56:15.828222  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:56:15.828277  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:56:15.836121  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0916 11:56:15.844478  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:56:15.844543  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:56:15.852988  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0916 11:56:15.861492  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:56:15.861564  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:56:15.869244  369925 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:56:15.906985  369925 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:56:15.907060  369925 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:56:15.923558  369925 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:56:15.923620  369925 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:56:15.923661  369925 kubeadm.go:310] OS: Linux
	I0916 11:56:15.923700  369925 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:56:15.923757  369925 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:56:15.923839  369925 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:56:15.923893  369925 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:56:15.923967  369925 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:56:15.924033  369925 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:56:15.924118  369925 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:56:15.924201  369925 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:56:15.924284  369925 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:56:15.975434  369925 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:56:15.975560  369925 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:56:15.975752  369925 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:56:15.981635  369925 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:56:15.984105  369925 out.go:235]   - Generating certificates and keys ...
	I0916 11:56:15.984222  369925 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:56:15.984305  369925 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:56:16.250457  369925 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:56:16.375579  369925 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:56:16.472746  369925 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:56:16.569904  369925 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:56:16.903980  369925 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:56:16.904160  369925 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-451928 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:56:17.174281  369925 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:56:17.174455  369925 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-451928 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:56:17.398938  369925 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:56:17.545679  369925 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:56:17.695489  369925 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:56:17.695611  369925 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:56:17.882081  369925 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:56:17.956171  369925 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:56:18.164752  369925 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:56:18.357126  369925 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:56:18.577865  369925 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:56:18.578456  369925 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:56:18.580900  369925 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:56:18.584022  369925 out.go:235]   - Booting up control plane ...
	I0916 11:56:18.584135  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:56:18.584206  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:56:18.584263  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:56:18.592926  369925 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:56:18.598803  369925 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:56:18.598898  369925 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:56:18.686928  369925 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:56:18.687087  369925 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:56:19.188585  369925 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.688188ms
	I0916 11:56:19.188683  369925 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:56:23.689816  369925 kubeadm.go:310] [api-check] The API server is healthy after 4.501234881s
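Both the kubelet-check and api-check gates are simple polls against a healthz endpoint with a 4m0s budget. A sketch of that wait loop, using the kubelet URL from the log (the 500ms poll interval is an assumption):

    // wait_healthz.go - sketch of polling a healthz endpoint until it returns 200.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthy
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        // The kubelet health check seen above: "This can take up to 4m0s".
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }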
	I0916 11:56:23.700925  369925 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:56:23.712262  369925 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:56:23.731867  369925 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:56:23.732082  369925 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-451928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:56:23.740449  369925 kubeadm.go:310] [bootstrap-token] Using token: 1cwsrz.9f3rgqsuscyt2usy
	I0916 11:56:23.742197  369925 out.go:235]   - Configuring RBAC rules ...
	I0916 11:56:23.742343  369925 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:56:23.747376  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:56:23.753665  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:56:23.756482  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:56:23.759949  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:56:23.762767  369925 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:56:24.096235  369925 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:56:24.521719  369925 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:56:25.096879  369925 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:56:25.097870  369925 kubeadm.go:310] 
	I0916 11:56:25.097955  369925 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:56:25.097965  369925 kubeadm.go:310] 
	I0916 11:56:25.098061  369925 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:56:25.098070  369925 kubeadm.go:310] 
	I0916 11:56:25.098099  369925 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:56:25.098209  369925 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:56:25.098294  369925 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:56:25.098307  369925 kubeadm.go:310] 
	I0916 11:56:25.098376  369925 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:56:25.098389  369925 kubeadm.go:310] 
	I0916 11:56:25.098467  369925 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:56:25.098478  369925 kubeadm.go:310] 
	I0916 11:56:25.098550  369925 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:56:25.098650  369925 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:56:25.098758  369925 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:56:25.098776  369925 kubeadm.go:310] 
	I0916 11:56:25.098894  369925 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:56:25.099001  369925 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:56:25.099012  369925 kubeadm.go:310] 
	I0916 11:56:25.099131  369925 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1cwsrz.9f3rgqsuscyt2usy \
	I0916 11:56:25.099258  369925 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:56:25.099290  369925 kubeadm.go:310] 	--control-plane 
	I0916 11:56:25.099300  369925 kubeadm.go:310] 
	I0916 11:56:25.099403  369925 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:56:25.099431  369925 kubeadm.go:310] 
	I0916 11:56:25.099631  369925 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1cwsrz.9f3rgqsuscyt2usy \
	I0916 11:56:25.099812  369925 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:56:25.102791  369925 kubeadm.go:310] W0916 11:56:15.903810    1328 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:56:25.103142  369925 kubeadm.go:310] W0916 11:56:15.904614    1328 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:56:25.103423  369925 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:56:25.103527  369925 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
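	
	The two deprecation warnings above point at the fix themselves; as a minimal sketch, assuming the old kubeadm.k8s.io/v1beta3 ClusterConfiguration/InitConfiguration live in old.yaml:
	
	    # Rewrite the deprecated v1beta3 spec as the newer API version used by kubeadm v1.31.
	    kubeadm config migrate --old-config old.yaml --new-config new.yaml
	    # Review the migrated spec before feeding it back to kubeadm.
	    cat new.yaml
	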
	I0916 11:56:25.103557  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:25.103575  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:25.106291  369925 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:56:25.107572  369925 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:56:25.111391  369925 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:56:25.111412  369925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:56:25.128930  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:56:25.326058  369925 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:56:25.326137  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:25.326155  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-451928 minikube.k8s.io/updated_at=2024_09_16T11_56_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=default-k8s-diff-port-451928 minikube.k8s.io/primary=true
	I0916 11:56:25.334061  369925 ops.go:34] apiserver oom_adj: -16
	I0916 11:56:25.415582  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:25.916131  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:26.415875  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:26.916505  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:27.416652  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:27.915672  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.415630  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.916053  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.986004  369925 kubeadm.go:1113] duration metric: took 3.65993128s to wait for elevateKubeSystemPrivileges
	I0916 11:56:28.986059  369925 kubeadm.go:394] duration metric: took 13.24304259s to StartCluster
	I0916 11:56:28.986084  369925 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:28.986181  369925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:56:28.987987  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:28.988247  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:56:28.988275  369925 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:56:28.988246  369925 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:56:28.988353  369925 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-451928"
	I0916 11:56:28.988356  369925 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-451928"
	I0916 11:56:28.988371  369925 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-451928"
	I0916 11:56:28.988376  369925 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-451928"
	I0916 11:56:28.988398  369925 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:56:28.988460  369925 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:28.990224  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:28.990977  369925 out.go:177] * Verifying Kubernetes components...
	I0916 11:56:28.991103  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:28.992313  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:29.018007  369925 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-451928"
	I0916 11:56:29.018056  369925 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:56:29.018152  369925 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:56:29.018499  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:29.019540  369925 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:56:29.019561  369925 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:56:29.019604  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:29.045132  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:29.049324  369925 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:56:29.049370  369925 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:56:29.049429  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:29.068614  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:29.108027  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:56:29.210051  369925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:56:29.214324  369925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:56:29.318299  369925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:56:29.511298  369925 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:56:29.512950  369925 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-451928" to be "Ready" ...
	W0916 11:56:29.613687  369925 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-451928" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0916 11:56:29.613812  369925 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0916 11:56:29.905443  369925 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:56:29.907318  369925 addons.go:510] duration metric: took 919.042056ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:56:31.516192  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:33.516791  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:36.016869  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:38.516958  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:41.016196  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:41.516131  369925 node_ready.go:49] node "default-k8s-diff-port-451928" has status "Ready":"True"
	I0916 11:56:41.516158  369925 node_ready.go:38] duration metric: took 12.003155681s for node "default-k8s-diff-port-451928" to be "Ready" ...
	I0916 11:56:41.516169  369925 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:56:41.522890  369925 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.028991  369925 pod_ready.go:93] pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.029023  369925 pod_ready.go:82] duration metric: took 1.506107319s for pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.029038  369925 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.034688  369925 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.034715  369925 pod_ready.go:82] duration metric: took 5.669153ms for pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.034729  369925 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.039143  369925 pod_ready.go:93] pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.039165  369925 pod_ready.go:82] duration metric: took 4.428544ms for pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.039177  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.043833  369925 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.043858  369925 pod_ready.go:82] duration metric: took 4.669057ms for pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.043869  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.116405  369925 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.116427  369925 pod_ready.go:82] duration metric: took 72.552944ms for pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.116438  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g84zv" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.516582  369925 pod_ready.go:93] pod "kube-proxy-g84zv" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.516608  369925 pod_ready.go:82] duration metric: took 400.162448ms for pod "kube-proxy-g84zv" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.516632  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.916727  369925 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.916750  369925 pod_ready.go:82] duration metric: took 400.110653ms for pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.916762  369925 pod_ready.go:39] duration metric: took 2.400579164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:56:43.916774  369925 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:56:43.916822  369925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:56:43.928042  369925 api_server.go:72] duration metric: took 14.939651343s to wait for apiserver process to appear ...
	I0916 11:56:43.928069  369925 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:56:43.928094  369925 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0916 11:56:43.931965  369925 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0916 11:56:43.932933  369925 api_server.go:141] control plane version: v1.31.1
	I0916 11:56:43.932960  369925 api_server.go:131] duration metric: took 4.882393ms to wait for apiserver health ...
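	
	The healthz probe logged above can be reproduced by hand against the profile's non-default port 8444; a minimal sketch (run where the apiserver is reachable, skipping TLS verification for brevity):
	
	    # Query the apiserver health endpoint; the body should read "ok" on success.
	    curl -k https://192.168.103.2:8444/healthz
	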
	I0916 11:56:43.932970  369925 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:56:44.120099  369925 system_pods.go:59] 9 kube-system pods found
	I0916 11:56:44.120130  369925 system_pods.go:61] "coredns-7c65d6cfc9-c6qt9" [4e0063e4-a603-400c-acb8-094aed6b2941] Running
	I0916 11:56:44.120135  369925 system_pods.go:61] "coredns-7c65d6cfc9-tnm2s" [1ea2318a-d454-406d-bb11-aa3e16dc2950] Running
	I0916 11:56:44.120138  369925 system_pods.go:61] "etcd-default-k8s-diff-port-451928" [1b71472f-f6fc-4a12-bbfc-0ee84a439f81] Running
	I0916 11:56:44.120142  369925 system_pods.go:61] "kindnet-rk7s2" [9b5ccae0-58d8-475c-9c5a-dbb30e19f569] Running
	I0916 11:56:44.120146  369925 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-451928" [f1bb7524-02b3-4ba9-9e22-e4993a8a10b1] Running
	I0916 11:56:44.120149  369925 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-451928" [89cefae9-3120-4eda-beea-28223e0ce7f0] Running
	I0916 11:56:44.120153  369925 system_pods.go:61] "kube-proxy-g84zv" [9e114aae-0ef0-40a3-96c6-f2bc67943f01] Running
	I0916 11:56:44.120156  369925 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-451928" [c53be62e-0975-4134-9769-7df0c6a05afb] Running
	I0916 11:56:44.120161  369925 system_pods.go:61] "storage-provisioner" [3e5fdbb0-ecfb-490a-8314-e624e944b4b5] Running
	I0916 11:56:44.120168  369925 system_pods.go:74] duration metric: took 187.191857ms to wait for pod list to return data ...
	I0916 11:56:44.120175  369925 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:56:44.317310  369925 default_sa.go:45] found service account: "default"
	I0916 11:56:44.317348  369925 default_sa.go:55] duration metric: took 197.165786ms for default service account to be created ...
	I0916 11:56:44.317359  369925 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:56:44.519297  369925 system_pods.go:86] 9 kube-system pods found
	I0916 11:56:44.519330  369925 system_pods.go:89] "coredns-7c65d6cfc9-c6qt9" [4e0063e4-a603-400c-acb8-094aed6b2941] Running
	I0916 11:56:44.519339  369925 system_pods.go:89] "coredns-7c65d6cfc9-tnm2s" [1ea2318a-d454-406d-bb11-aa3e16dc2950] Running
	I0916 11:56:44.519344  369925 system_pods.go:89] "etcd-default-k8s-diff-port-451928" [1b71472f-f6fc-4a12-bbfc-0ee84a439f81] Running
	I0916 11:56:44.519351  369925 system_pods.go:89] "kindnet-rk7s2" [9b5ccae0-58d8-475c-9c5a-dbb30e19f569] Running
	I0916 11:56:44.519356  369925 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-451928" [f1bb7524-02b3-4ba9-9e22-e4993a8a10b1] Running
	I0916 11:56:44.519362  369925 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-451928" [89cefae9-3120-4eda-beea-28223e0ce7f0] Running
	I0916 11:56:44.519369  369925 system_pods.go:89] "kube-proxy-g84zv" [9e114aae-0ef0-40a3-96c6-f2bc67943f01] Running
	I0916 11:56:44.519377  369925 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-451928" [c53be62e-0975-4134-9769-7df0c6a05afb] Running
	I0916 11:56:44.519382  369925 system_pods.go:89] "storage-provisioner" [3e5fdbb0-ecfb-490a-8314-e624e944b4b5] Running
	I0916 11:56:44.519391  369925 system_pods.go:126] duration metric: took 202.026143ms to wait for k8s-apps to be running ...
	I0916 11:56:44.519404  369925 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:56:44.519454  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:56:44.530991  369925 system_svc.go:56] duration metric: took 11.577254ms WaitForService to wait for kubelet
	I0916 11:56:44.531030  369925 kubeadm.go:582] duration metric: took 15.54264235s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:56:44.531057  369925 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:56:44.717684  369925 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:56:44.717712  369925 node_conditions.go:123] node cpu capacity is 8
	I0916 11:56:44.717722  369925 node_conditions.go:105] duration metric: took 186.660851ms to run NodePressure ...
	I0916 11:56:44.717733  369925 start.go:241] waiting for startup goroutines ...
	I0916 11:56:44.717739  369925 start.go:246] waiting for cluster config update ...
	I0916 11:56:44.717749  369925 start.go:255] writing updated cluster config ...
	I0916 11:56:44.718049  369925 ssh_runner.go:195] Run: rm -f paused
	I0916 11:56:44.724825  369925 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-451928" cluster and "default" namespace by default
	E0916 11:56:44.725996  369925 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
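	
	The "exec format error" above typically means the kubectl binary at /usr/local/bin/kubectl is not a valid executable for the host architecture (wrong-arch build, or a truncated/empty file). A quick hedged check:
	
	    # Compare the binary's target architecture with the host's.
	    file /usr/local/bin/kubectl
	    uname -m
	    # An x86_64 host running a non-x86-64 (or corrupt) binary yields "exec format error".
	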
	
	
	==> CRI-O <==
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.622558388Z" level=info msg="Got pod network &{Name:coredns-7c65d6cfc9-tnm2s Namespace:kube-system ID:3f6d7320bc95f7e18391efc33e51215320dbbeeeb8a8f38842192646dcd50333 UID:1ea2318a-d454-406d-bb11-aa3e16dc2950 NetNS:/var/run/netns/b61f6ad8-2f49-4d44-9eb3-900131725eac Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.622722250Z" level=info msg="Checking pod kube-system_coredns-7c65d6cfc9-tnm2s for CNI network kindnet (type=ptp)"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.625917041Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7a288d42-63ec-4354-8113-56e9453ace39 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.626732708Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=02705cf5-fcff-4729-8c10-3d8979c9bdde name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.626957027Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=02705cf5-fcff-4729-8c10-3d8979c9bdde name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.629127213Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-c6qt9/coredns" id=4695c526-57e0-44df-b4c6-58a2d310fce8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.629172912Z" level=info msg="Ran pod sandbox 3f6d7320bc95f7e18391efc33e51215320dbbeeeb8a8f38842192646dcd50333 with infra container: kube-system/coredns-7c65d6cfc9-tnm2s/POD" id=f037ffdd-65f4-4af2-aa54-251c0aba1635 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.629229257Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.630215099Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=94e07138-2ca1-4dcb-bf8c-b3a6c191a567 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.630424760Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=94e07138-2ca1-4dcb-bf8c-b3a6c191a567 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.630953004Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=1ec1b014-1ef0-4b6e-8e11-cb9e6e365dda name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631129634Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=1ec1b014-1ef0-4b6e-8e11-cb9e6e365dda name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631387634Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/447f44726f03971435b98afa47474c7dd5b9992dfeb3ade9078289d4125787ad/merged/etc/passwd: no such file or directory"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631429875Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/447f44726f03971435b98afa47474c7dd5b9992dfeb3ade9078289d4125787ad/merged/etc/group: no such file or directory"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631733377Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-tnm2s/coredns" id=e54c9faa-287e-4fe9-9337-4d48efaf06fc name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631821742Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.710156036Z" level=info msg="Created container 08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a: kube-system/storage-provisioner/storage-provisioner" id=ae06edbb-7f1e-4bf1-a892-41bea84b1c62 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.710838661Z" level=info msg="Starting container: 08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a" id=3537d5b4-7fd1-4045-a722-bc2ada20016a name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.717793917Z" level=info msg="Started container" PID=2272 containerID=08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a description=kube-system/storage-provisioner/storage-provisioner id=3537d5b4-7fd1-4045-a722-bc2ada20016a name=/runtime.v1.RuntimeService/StartContainer sandboxID=76d2ac2b9d946657e773a66ffdfe9830c488cb4be1aa1b58bb27289cb5f0ad15
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.728172616Z" level=info msg="Created container 045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3: kube-system/coredns-7c65d6cfc9-c6qt9/coredns" id=4695c526-57e0-44df-b4c6-58a2d310fce8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.729064907Z" level=info msg="Starting container: 045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3" id=fae55f1f-4e38-402c-8a98-c6626a612f31 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.735728481Z" level=info msg="Started container" PID=2285 containerID=045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3 description=kube-system/coredns-7c65d6cfc9-c6qt9/coredns id=fae55f1f-4e38-402c-8a98-c6626a612f31 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c131646e0d50d73f2a3004247eaab734b98ca5366f46bc44b47bc034c0a2f35b
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.741598902Z" level=info msg="Created container 688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba: kube-system/coredns-7c65d6cfc9-tnm2s/coredns" id=e54c9faa-287e-4fe9-9337-4d48efaf06fc name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.793742567Z" level=info msg="Starting container: 688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba" id=2bf7c9d7-370d-4dff-b233-6c75543286c2 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.801265340Z" level=info msg="Started container" PID=2311 containerID=688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba description=kube-system/coredns-7c65d6cfc9-tnm2s/coredns id=2bf7c9d7-370d-4dff-b233-6c75543286c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f6d7320bc95f7e18391efc33e51215320dbbeeeb8a8f38842192646dcd50333
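	
	The CRI-O activity above can be inspected directly on the node with crictl; a minimal sketch, pointing crictl at the CRI-O socket recorded in the node annotations:
	
	    # List containers managed by CRI-O.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps
	    # Tail the logs of one of the coredns containers started above (ID prefix from the log).
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs 688086cd61e60
	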
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	688086cd61e60       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   0                   3f6d7320bc95f       coredns-7c65d6cfc9-tnm2s
	045367a0c66bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   0                   c131646e0d50d       coredns-7c65d6cfc9-c6qt9
	08fa360282467       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       0                   76d2ac2b9d946       storage-provisioner
	9d3593f5e16ca       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   15 seconds ago      Running             kindnet-cni               0                   2cfd58dc984bd       kindnet-rk7s2
	4ec4a11e3a24d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 seconds ago      Running             kube-proxy                0                   abb22584f1ba3       kube-proxy-g84zv
	7928e02dcad53       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   26 seconds ago      Running             kube-apiserver            0                   2567d54afee95       kube-apiserver-default-k8s-diff-port-451928
	8e9e71592f12e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   26 seconds ago      Running             kube-scheduler            0                   205f26e38ad59       kube-scheduler-default-k8s-diff-port-451928
	478b30866eae0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   26 seconds ago      Running             kube-controller-manager   0                   5c43721ed6e3b       kube-controller-manager-default-k8s-diff-port-451928
	245f21f94877c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   26 seconds ago      Running             etcd                      0                   9a7d0f2b97773       etcd-default-k8s-diff-port-451928
	
	
	==> coredns [045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47255 - 13176 "HINFO IN 2991928513979281550.1716499013040556013. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008117726s
	
	
	==> coredns [688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56972 - 15234 "HINFO IN 8713296587055300928.4817992167101797270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010563621s
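	
	The Corefile these coredns instances loaded, including the host.minikube.internal hosts block injected earlier in the start log, can be read back from the cluster:
	
	    # Show the CoreDNS configuration, including the injected host record.
	    kubectl -n kube-system get configmap coredns -o yaml
	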
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-451928
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-451928
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=default-k8s-diff-port-451928
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_56_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-451928
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:56:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-451928
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 778d5e12087f47e2ae021c8dc368f974
	  System UUID:                96d27eb1-3e28-4d66-8a00-17bd26589e25
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-c6qt9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16s
	  kube-system                 coredns-7c65d6cfc9-tnm2s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16s
	  kube-system                 etcd-default-k8s-diff-port-451928                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         21s
	  kube-system                 kindnet-rk7s2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-451928             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-451928    200m (2%)     0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-proxy-g84zv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-451928             100m (1%)     0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 15s   kube-proxy       
	  Normal   Starting                 21s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 21s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  21s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17s   node-controller  Node default-k8s-diff-port-451928 event: Registered Node default-k8s-diff-port-451928 in Controller
	  Normal   NodeReady                4s    kubelet          Node default-k8s-diff-port-451928 status is now: NodeReady
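	
	The NodeReady transition recorded above can be queried directly; a minimal sketch using kubectl's jsonpath output:
	
	    # Print just the Ready condition status ("True"/"False") for this node.
	    kubectl get node default-k8s-diff-port-451928 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	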
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +1.027886] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000007] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +2.015855] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000006] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +4.223671] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000005] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000002] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000002] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +8.191398] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000006] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	
	
	==> etcd [245f21f94877cabfe24fc492e462f5cf8b616b6966f8967725e5ff7548bdc657] <==
	{"level":"info","ts":"2024-09-16T11:56:19.630617Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:56:19.630833Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:56:19.630859Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:56:19.631376Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:56:19.631433Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:56:19.916096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.917236Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.917944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:56:19.917968Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:56:19.918201Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:56:19.918225Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:56:19.918235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.917948Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-451928 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:56:19.918329Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.918361Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.919099Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:56:19.920323Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:56:19.921495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T11:56:19.921594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:56:45 up  1:39,  0 users,  load average: 2.30, 1.34, 1.01
	Linux default-k8s-diff-port-451928 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [9d3593f5e16ca1e3018cf675c2777bfccccb3325b4a618a4fc6f6dab6efde4ab] <==
	I0916 11:56:30.296501       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:56:30.296764       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0916 11:56:30.296916       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:56:30.296930       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:56:30.296951       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:56:30.694194       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:56:30.694222       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:56:30.694230       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:56:30.894328       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:56:30.894449       1 metrics.go:61] Registering metrics
	I0916 11:56:30.894522       1 controller.go:374] Syncing nftables rules
	I0916 11:56:40.698104       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:56:40.698169       1 main.go:299] handling current node
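	
	kindnet above handles the node's 10.244.0.0/24 pod subnet; whether that CIDR is actually wired into the node's routing can be checked on the host, as a minimal sketch:
	
	    # Show any routes installed for the pod subnet.
	    ip route | grep 10.244
	    # Brief view of interfaces (the ptp CNI creates per-pod veth pairs).
	    ip -brief addr
	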
	
	
	==> kube-apiserver [7928e02dcad530c19c0b6ec7e01fbb3385f0324d1232f9672d14062a1addcfd3] <==
	I0916 11:56:21.806408       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:56:21.806414       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:56:21.806420       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:56:21.817011       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:56:21.823263       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:56:21.823291       1 policy_source.go:224] refreshing policies
	E0916 11:56:21.859718       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E0916 11:56:21.878300       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:56:21.907495       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:56:22.081916       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:56:22.710034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:56:22.713839       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:56:22.713860       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:56:23.191226       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:56:23.228428       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:56:23.312905       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:56:23.319069       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:56:23.320261       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:56:23.324351       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:56:23.738612       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:56:24.505110       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:56:24.520375       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:56:24.528400       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:56:29.401242       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:56:29.513882       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [478b30866eae01a91f51089d900b6295124848c3e35c0f765a4cbeb3bf0485fe] <==
	I0916 11:56:28.689382       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 11:56:28.689453       1 shared_informer.go:320] Caches are synced for deployment
	I0916 11:56:28.694790       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:56:28.699949       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:56:28.739103       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 11:56:29.110053       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:56:29.193577       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:56:29.193627       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:56:29.310514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:29.703476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="184.541777ms"
	I0916 11:56:29.710974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.439513ms"
	I0916 11:56:29.711090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.796µs"
	I0916 11:56:29.711219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.08µs"
	I0916 11:56:41.247397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:41.270424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:41.277033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.474µs"
	I0916 11:56:41.278381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.272µs"
	I0916 11:56:41.294102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.258µs"
	I0916 11:56:41.303925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.725µs"
	I0916 11:56:42.534787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.552µs"
	I0916 11:56:42.554508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.70186ms"
	I0916 11:56:42.554631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.387µs"
	I0916 11:56:42.572298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.233175ms"
	I0916 11:56:42.572396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.477µs"
	I0916 11:56:43.689750       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4ec4a11e3a24d5e1ce02dfd1183ec90b7b3781239d805a4d6ccf113375e15922] <==
	I0916 11:56:29.947995       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:56:30.046104       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 11:56:30.046167       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:56:30.064920       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:56:30.064979       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:56:30.067043       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:56:30.067493       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:56:30.067527       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:56:30.068845       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:56:30.069397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:56:30.069400       1 config.go:199] "Starting service config controller"
	I0916 11:56:30.069422       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:56:30.069563       1 config.go:328] "Starting node config controller"
	I0916 11:56:30.069629       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:56:30.169579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:56:30.169580       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:56:30.169853       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e9e71592f12e81a163e98e2f07e72e1f169a103a6aed393c95dee0e94c5cf50] <==
	W0916 11:56:21.814115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:56:21.814368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:21.812491       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:56:21.814396       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:56:21.814469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:56:21.814554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.687572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:56:22.687618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.748312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:56:22.748354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.798907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:56:22.798950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.852016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:56:22.852069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.912767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:56:22.912812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.917238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:56:22.917276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.971675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:56:22.971722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:23.005307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:56:23.005394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:23.091228       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:56:23.091278       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:56:25.811059       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.595344    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e114aae-0ef0-40a3-96c6-f2bc67943f01-kube-proxy\") pod \"kube-proxy-g84zv\" (UID: \"9e114aae-0ef0-40a3-96c6-f2bc67943f01\") " pod="kube-system/kube-proxy-g84zv"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.595415    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j8f\" (UniqueName: \"kubernetes.io/projected/9e114aae-0ef0-40a3-96c6-f2bc67943f01-kube-api-access-t9j8f\") pod \"kube-proxy-g84zv\" (UID: \"9e114aae-0ef0-40a3-96c6-f2bc67943f01\") " pod="kube-system/kube-proxy-g84zv"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.595451    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e114aae-0ef0-40a3-96c6-f2bc67943f01-lib-modules\") pod \"kube-proxy-g84zv\" (UID: \"9e114aae-0ef0-40a3-96c6-f2bc67943f01\") " pod="kube-system/kube-proxy-g84zv"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.595476    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e114aae-0ef0-40a3-96c6-f2bc67943f01-xtables-lock\") pod \"kube-proxy-g84zv\" (UID: \"9e114aae-0ef0-40a3-96c6-f2bc67943f01\") " pod="kube-system/kube-proxy-g84zv"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.695876    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-cni-cfg\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.695943    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-lib-modules\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.696005    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-xtables-lock\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.696043    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzczw\" (UniqueName: \"kubernetes.io/projected/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-kube-api-access-tzczw\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.705478    1676 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:56:30 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:30.510135    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rk7s2" podStartSLOduration=1.510110896 podStartE2EDuration="1.510110896s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:30.509932137 +0000 UTC m=+6.216515412" watchObservedRunningTime="2024-09-16 11:56:30.510110896 +0000 UTC m=+6.216694175"
	Sep 16 11:56:30 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:30.519577    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g84zv" podStartSLOduration=1.519552813 podStartE2EDuration="1.519552813s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:30.519473363 +0000 UTC m=+6.226056639" watchObservedRunningTime="2024-09-16 11:56:30.519552813 +0000 UTC m=+6.226136092"
	Sep 16 11:56:34 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:34.430244    1676 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487794430057224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:34 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:34.430286    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487794430057224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.240163    1676 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375093    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrgm\" (UniqueName: \"kubernetes.io/projected/4e0063e4-a603-400c-acb8-094aed6b2941-kube-api-access-rfrgm\") pod \"coredns-7c65d6cfc9-c6qt9\" (UID: \"4e0063e4-a603-400c-acb8-094aed6b2941\") " pod="kube-system/coredns-7c65d6cfc9-c6qt9"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375143    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw5mk\" (UniqueName: \"kubernetes.io/projected/3e5fdbb0-ecfb-490a-8314-e624e944b4b5-kube-api-access-cw5mk\") pod \"storage-provisioner\" (UID: \"3e5fdbb0-ecfb-490a-8314-e624e944b4b5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375194    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e0063e4-a603-400c-acb8-094aed6b2941-config-volume\") pod \"coredns-7c65d6cfc9-c6qt9\" (UID: \"4e0063e4-a603-400c-acb8-094aed6b2941\") " pod="kube-system/coredns-7c65d6cfc9-c6qt9"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375237    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3e5fdbb0-ecfb-490a-8314-e624e944b4b5-tmp\") pod \"storage-provisioner\" (UID: \"3e5fdbb0-ecfb-490a-8314-e624e944b4b5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375269    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ea2318a-d454-406d-bb11-aa3e16dc2950-config-volume\") pod \"coredns-7c65d6cfc9-tnm2s\" (UID: \"1ea2318a-d454-406d-bb11-aa3e16dc2950\") " pod="kube-system/coredns-7c65d6cfc9-tnm2s"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375285    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzpfm\" (UniqueName: \"kubernetes.io/projected/1ea2318a-d454-406d-bb11-aa3e16dc2950-kube-api-access-qzpfm\") pod \"coredns-7c65d6cfc9-tnm2s\" (UID: \"1ea2318a-d454-406d-bb11-aa3e16dc2950\") " pod="kube-system/coredns-7c65d6cfc9-tnm2s"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.534781    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tnm2s" podStartSLOduration=13.534759159 podStartE2EDuration="13.534759159s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.534367604 +0000 UTC m=+18.240950903" watchObservedRunningTime="2024-09-16 11:56:42.534759159 +0000 UTC m=+18.241342440"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.574588    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.574561361 podStartE2EDuration="13.574561361s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.574522367 +0000 UTC m=+18.281105644" watchObservedRunningTime="2024-09-16 11:56:42.574561361 +0000 UTC m=+18.281144637"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.575038    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c6qt9" podStartSLOduration=13.575025761 podStartE2EDuration="13.575025761s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.561105849 +0000 UTC m=+18.267689125" watchObservedRunningTime="2024-09-16 11:56:42.575025761 +0000 UTC m=+18.281609035"
	Sep 16 11:56:44 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:44.431407    1676 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487804431221811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:44 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:44.431440    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487804431221811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a] <==
	I0916 11:56:41.733859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:56:41.743237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:56:41.743282       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:56:41.802642       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:56:41.802712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18fcca8c-b8bd-4cf6-b5f8-70b48585a383", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097 became leader
	I0916 11:56:41.802842       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097!
	I0916 11:56:41.903549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097!
	

-- /stdout --
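The storage-provisioner block at the end of the capture above shows a standard Kubernetes leader election: the process acquires the kube-system/k8s.io-minikube-hostpath lease, and only then starts its provisioner controller, so a single replica acts at a time. Below is a minimal Go sketch of the same pattern with client-go. This is an illustration, not the provisioner's actual code: the Event line above shows it still uses an Endpoints-based lock, while the sketch uses the newer Lease API, and the identity string and timings are made-up values.

package main

// Minimal client-go leader-election sketch, mirroring the
// "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath"
// lines in the capture above. Illustrative assumptions: Lease-based lock
// instead of the provisioner's Endpoints lock; identity and timings invented.

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // the provisioner runs as a pod
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"}, // hypothetical identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stopping")
			},
		},
	})
}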
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (526.215µs)
helpers_test.go:263: kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
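Note the recurring failure mode in this run: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary itself (a truncated download, an HTML error page saved in its place, or a binary built for a different architecture), so no kubectl-based assertion in the report could succeed. A minimal Go triage sketch using only the standard library, assuming a Linux host and the path taken from the failure above:

package main

// Quick diagnostic for "fork/exec ...: exec format error": the kernel
// rejected the file before it ran, so inspect it as an ELF binary.

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	const path = "/usr/local/bin/kubectl" // path from the failure above
	f, err := elf.Open(path)
	if err != nil {
		// Not a valid ELF file at all (truncated download, HTML page, ...).
		fmt.Printf("%s: not a valid ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()
	// Anything other than EM_X86_64 on this amd64 agent would explain the error.
	fmt.Printf("class=%v machine=%v type=%v\n", f.Class, f.Machine, f.Type)
}

On this amd64 agent, a healthy kubectl should report class=ELFCLASS64, machine=EM_X86_64, and type=ET_EXEC (or ET_DYN for a position-independent build).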
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-451928
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-451928:

-- stdout --
	[
	    {
	        "Id": "5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae",
	        "Created": "2024-09-16T11:56:10.793026862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 370642,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:56:10.911717057Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/hosts",
	        "LogPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae-json.log",
	        "Name": "/default-k8s-diff-port-451928",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-451928:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-451928",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-451928",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-451928/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-451928",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-451928",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-451928",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c295616087e44bffb82a8e4e82399f08c9ad2a364df3b7343d36ba13396023a6",
	            "SandboxKey": "/var/run/docker/netns/c295616087e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-451928": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "22c51b08b0ca2daf580627f39cd71ae241a476b62a744a7a3bfd63c1aaadfdfe",
	                    "EndpointID": "576e0db3957872bf299445aa83a23070656403cdbf34945b607d06891920fd68",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-451928",
	                        "5e4edb1ce4fb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
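The inspect dump above is a JSON array with one object per container; the host-side ports minikube uses to reach the node (for example SSH on 22/tcp, bound here to 127.0.0.1:33108) sit under NetworkSettings.Ports. A minimal Go sketch that extracts that mapping by shelling out to docker inspect and decoding only the fields it needs (an illustration, not minikube's own code):

package main

// Sketch: recover the host port bound to the node container's SSH port
// (22/tcp) from `docker inspect` JSON, as in the dump above. The struct
// models only the fields required for this lookup.

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-451928").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort) // expect 127.0.0.1:33108 here
	}
}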
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-451928 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-451928 logs -n 25: (1.139514129s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467        | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-406673 image                           | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-946599 | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | disable-driver-mounts-946599                           |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-179932             | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-179932                  | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-179932 image list                           | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:55 UTC | 16 Sep 24 11:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:56:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:56:05.303544  369925 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:56:05.303695  369925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:56:05.303707  369925 out.go:358] Setting ErrFile to fd 2...
	I0916 11:56:05.303713  369925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:56:05.304017  369925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:56:05.304835  369925 out.go:352] Setting JSON to false
	I0916 11:56:05.306135  369925 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5905,"bootTime":1726481860,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:56:05.306265  369925 start.go:139] virtualization: kvm guest
	I0916 11:56:05.308684  369925 out.go:177] * [default-k8s-diff-port-451928] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:56:05.310432  369925 notify.go:220] Checking for updates...
	I0916 11:56:05.310468  369925 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:56:05.311947  369925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:56:05.313397  369925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:56:05.315161  369925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:56:05.316694  369925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:56:05.318120  369925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:56:05.319958  369925 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320054  369925 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320136  369925 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320218  369925 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:56:05.343305  369925 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:56:05.343431  369925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:56:05.398162  369925 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:56:05.386767708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:56:05.398269  369925 docker.go:318] overlay module found
	I0916 11:56:05.401236  369925 out.go:177] * Using the docker driver based on user configuration
	I0916 11:56:05.402778  369925 start.go:297] selected driver: docker
	I0916 11:56:05.402792  369925 start.go:901] validating driver "docker" against <nil>
	I0916 11:56:05.402803  369925 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:56:05.403619  369925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:56:05.458917  369925 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:56:05.449556012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
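The dump above is the decoded output of `docker system info --format "{{json .}}"`, which the driver health check runs twice. As a minimal, hypothetical Go sketch of that decode step (the struct is trimmed to a few fields visible in the dump and is not minikube's actual info.go type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo mirrors a handful of the fields visible in the info.go:266 dump;
// the JSON keys match `docker system info --format "{{json .}}"` output.
type dockerInfo struct {
	Driver            string `json:"Driver"`
	NCPU              int    `json:"NCPU"`
	MemTotal          int64  `json:"MemTotal"`
	CgroupDriver      string `json:"CgroupDriver"`
	ContainersRunning int    `json:"ContainersRunning"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err) // e.g. the docker daemon is not running
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// Against the daemon in this report: overlay2, 8 CPUs, cgroupfs.
	fmt.Printf("driver=%s ncpu=%d mem=%d cgroup=%s running=%d\n",
		info.Driver, info.NCPU, info.MemTotal, info.CgroupDriver, info.ContainersRunning)
}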
	I0916 11:56:05.459101  369925 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:56:05.459345  369925 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:56:05.460940  369925 out.go:177] * Using Docker driver with root privileges
	I0916 11:56:05.462262  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:05.462314  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:05.462326  369925 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:56:05.462389  369925 start.go:340] cluster config:
	{Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:56:05.463951  369925 out.go:177] * Starting "default-k8s-diff-port-451928" primary control-plane node in "default-k8s-diff-port-451928" cluster
	I0916 11:56:05.465195  369925 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:56:05.466528  369925 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:56:05.467567  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:05.467607  369925 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:56:05.467620  369925 cache.go:56] Caching tarball of preloaded images
	I0916 11:56:05.467678  369925 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:56:05.467704  369925 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:56:05.467737  369925 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:56:05.467838  369925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json ...
	I0916 11:56:05.467865  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json: {Name:mk3f0192a4b7f3d3763c1a6bd15f21266a5e389c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
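The WriteFile/lock pair above (lock.go:35) serializes writers of the same profile config.json, retrying every 500ms for up to a minute. A rough, hypothetical sketch of that pattern using a plain O_EXCL lock file; the Delay and Timeout values come from the log line, but the locking mechanism here is an illustrative stand-in, not minikube's actual lock.go:

package main

import (
	"encoding/json"
	"errors"
	"os"
	"time"
)

// writeConfigLocked marshals cfg to path, guarding the write with a simple
// <path>.lock file created O_EXCL. Acquisition is retried every 500ms up to
// a 1m deadline, matching the Delay/Timeout shown in the log.
func writeConfigLocked(path string, cfg any) error {
	lock := path + ".lock"
	deadline := time.Now().Add(time.Minute)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lock)
		}
		time.Sleep(500 * time.Millisecond)
	}
	defer os.Remove(lock)

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// A tiny subset of the cluster config fields shown above, for illustration.
	cfg := map[string]any{"Name": "default-k8s-diff-port-451928", "APIServerPort": 8444}
	if err := writeConfigLocked("config.json", cfg); err != nil {
		panic(err)
	}
}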
	W0916 11:56:05.488729  369925 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:56:05.488751  369925 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:56:05.488835  369925 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:56:05.488858  369925 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:56:05.488863  369925 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:56:05.488873  369925 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:56:05.488884  369925 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:56:05.554343  369925 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:56:05.554398  369925 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:56:05.554444  369925 start.go:360] acquireMachinesLock for default-k8s-diff-port-451928: {Name:mkd4d5ce5590d094d470576746b410c1fbb05d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:56:05.554565  369925 start.go:364] duration metric: took 95.582µs to acquireMachinesLock for "default-k8s-diff-port-451928"
	I0916 11:56:05.554594  369925 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:56:05.554695  369925 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:56:05.556420  369925 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:56:05.556691  369925 start.go:159] libmachine.API.Create for "default-k8s-diff-port-451928" (driver="docker")
	I0916 11:56:05.556715  369925 client.go:168] LocalClient.Create starting
	I0916 11:56:05.556786  369925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:56:05.556820  369925 main.go:141] libmachine: Decoding PEM data...
	I0916 11:56:05.556841  369925 main.go:141] libmachine: Parsing certificate...
	I0916 11:56:05.556906  369925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:56:05.556940  369925 main.go:141] libmachine: Decoding PEM data...
	I0916 11:56:05.556954  369925 main.go:141] libmachine: Parsing certificate...
	I0916 11:56:05.557289  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:56:05.576970  369925 cli_runner.go:211] docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:56:05.577029  369925 network_create.go:284] running [docker network inspect default-k8s-diff-port-451928] to gather additional debugging logs...
	I0916 11:56:05.577045  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928
	W0916 11:56:05.594489  369925 cli_runner.go:211] docker network inspect default-k8s-diff-port-451928 returned with exit code 1
	I0916 11:56:05.594540  369925 network_create.go:287] error running [docker network inspect default-k8s-diff-port-451928]: docker network inspect default-k8s-diff-port-451928: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-451928 not found
	I0916 11:56:05.594557  369925 network_create.go:289] output of [docker network inspect default-k8s-diff-port-451928]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-451928 not found
	
	** /stderr **
	I0916 11:56:05.594675  369925 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:56:05.613180  369925 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:56:05.614533  369925 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:56:05.615818  369925 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:56:05.616767  369925 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:56:05.617852  369925 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:56:05.618825  369925 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:56:05.620128  369925 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023833e0}
	I0916 11:56:05.620159  369925 network_create.go:124] attempt to create docker network default-k8s-diff-port-451928 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:56:05.620217  369925 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 default-k8s-diff-port-451928
	I0916 11:56:05.688360  369925 network_create.go:108] docker network default-k8s-diff-port-451928 192.168.103.0/24 created
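The subnet walk above starts at 192.168.49.0/24 and advances the third octet in steps of 9 until it finds a block with no existing bridge interface, landing on 192.168.103.0/24. A simplified sketch of that selection loop; the `taken` set is a stand-in for the live per-bridge probe the network.go:211 lines show:

package main

import "fmt"

// freePrivateSubnet mimics the scan visible above: start at 192.168.49.0/24
// and step the third octet by 9 until an unused block is found. The `taken`
// map replaces the real "inspect each docker bridge" check.
func freePrivateSubnet(taken map[int]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		if !taken[octet] {
			return fmt.Sprintf("192.168.%d.0/24", octet)
		}
	}
	return "" // no free /24 in range
}

func main() {
	// The six subnets the log reports as taken.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
	fmt.Println(freePrivateSubnet(taken)) // 192.168.103.0/24
}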
	I0916 11:56:05.688413  369925 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-451928" container
	I0916 11:56:05.688485  369925 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:56:05.707399  369925 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-451928 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:56:05.726926  369925 oci.go:103] Successfully created a docker volume default-k8s-diff-port-451928
	I0916 11:56:05.727024  369925 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-451928-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --entrypoint /usr/bin/test -v default-k8s-diff-port-451928:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:56:06.241480  369925 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-451928
	I0916 11:56:06.241517  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:06.241541  369925 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:56:06.241592  369925 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-451928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:56:10.727699  369925 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-451928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.486050673s)
	I0916 11:56:10.727728  369925 kic.go:203] duration metric: took 4.486185106s to extract preloaded images to volume ...
	W0916 11:56:10.727849  369925 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:56:10.727935  369925 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:56:10.777179  369925 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-451928 --name default-k8s-diff-port-451928 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --network default-k8s-diff-port-451928 --ip 192.168.103.2 --volume default-k8s-diff-port-451928:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:56:11.076240  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Running}}
	I0916 11:56:11.096679  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.115724  369925 cli_runner.go:164] Run: docker exec default-k8s-diff-port-451928 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:56:11.159179  369925 oci.go:144] the created container "default-k8s-diff-port-451928" has a running status.
	I0916 11:56:11.159223  369925 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa...
	I0916 11:56:11.413161  369925 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:56:11.438230  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.465506  369925 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:56:11.465530  369925 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-451928 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:56:11.515063  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.537179  369925 machine.go:93] provisionDockerMachine start ...
	I0916 11:56:11.537281  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.556335  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.556616  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.556640  369925 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:56:11.768876  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-451928
	
	I0916 11:56:11.768902  369925 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-451928"
	I0916 11:56:11.768966  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.788986  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.789249  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.789266  369925 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-451928 && echo "default-k8s-diff-port-451928" | sudo tee /etc/hostname
	I0916 11:56:11.936460  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-451928
	
	I0916 11:56:11.936559  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.954102  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.954288  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.954311  369925 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-451928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-451928/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-451928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:56:12.089644  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
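Each "Using SSH client type: native" block above is one command executed over an SSH connection to the container's published port 22 (127.0.0.1:33108 here). A bare-bones equivalent with golang.org/x/crypto/ssh, reusing the key path, username, and port from the log; error handling is minimal and the host-key check is skipped, which is only acceptable against a local test container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: target is a local container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33108", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // default-k8s-diff-port-451928
}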
	I0916 11:56:12.089677  369925 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:56:12.089715  369925 ubuntu.go:177] setting up certificates
	I0916 11:56:12.089731  369925 provision.go:84] configureAuth start
	I0916 11:56:12.089783  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:12.106669  369925 provision.go:143] copyHostCerts
	I0916 11:56:12.106734  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:56:12.106742  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:56:12.106811  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:56:12.106897  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:56:12.106906  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:56:12.106929  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:56:12.106983  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:56:12.106989  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:56:12.107010  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:56:12.107105  369925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-451928 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-451928 localhost minikube]
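The server cert generated above carries the SANs listed in the provision.go:117 line. A hedged crypto/x509 sketch of that kind of CA-signed issuance, with the SANs, organization, and 26280h lifetime taken from the log; the short CA paths and the assumption of an RSA/PKCS#1 CA key are illustrative, not minikube's exact code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem") // shortened stand-ins for the .minikube/certs paths
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("bad PEM input")
	}
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA/PKCS#1 CA key
	if err != nil {
		log.Fatal(err)
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-451928"}},
		// SANs copied from the provision.go:117 line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:    []string{"default-k8s-diff-port-451928", "localhost", "minikube"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pemWrite := func(path, typ string, body []byte, mode os.FileMode) {
		if err := os.WriteFile(path, pem.EncodeToMemory(&pem.Block{Type: typ, Bytes: body}), mode); err != nil {
			log.Fatal(err)
		}
	}
	pemWrite("server.pem", "CERTIFICATE", der, 0o644)
	pemWrite("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key), 0o600)
}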
	I0916 11:56:12.356779  369925 provision.go:177] copyRemoteCerts
	I0916 11:56:12.356846  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:56:12.356882  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.373979  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:12.474469  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:56:12.498244  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 11:56:12.520551  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:56:12.543988  369925 provision.go:87] duration metric: took 454.24102ms to configureAuth
	I0916 11:56:12.544015  369925 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:56:12.544171  369925 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:12.544262  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.562970  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:12.563218  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:12.563243  369925 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:56:12.788862  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:56:12.788899  369925 machine.go:96] duration metric: took 1.251694448s to provisionDockerMachine
	I0916 11:56:12.788917  369925 client.go:171] duration metric: took 7.23219201s to LocalClient.Create
	I0916 11:56:12.788941  369925 start.go:167] duration metric: took 7.232248271s to libmachine.API.Create "default-k8s-diff-port-451928"
	I0916 11:56:12.788953  369925 start.go:293] postStartSetup for "default-k8s-diff-port-451928" (driver="docker")
	I0916 11:56:12.788969  369925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:56:12.789043  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:56:12.789093  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.808336  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:12.906746  369925 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:56:12.909982  369925 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:56:12.910020  369925 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:56:12.910032  369925 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:56:12.910040  369925 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:56:12.910054  369925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:56:12.910120  369925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:56:12.910210  369925 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:56:12.910334  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:56:12.919277  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:56:12.943861  369925 start.go:296] duration metric: took 154.890441ms for postStartSetup
	I0916 11:56:12.944234  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:12.962378  369925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json ...
	I0916 11:56:12.962654  369925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:56:12.962705  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.979711  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.070255  369925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:56:13.074523  369925 start.go:128] duration metric: took 7.519810319s to createHost
	I0916 11:56:13.074564  369925 start.go:83] releasing machines lock for "default-k8s-diff-port-451928", held for 7.519971551s
	I0916 11:56:13.074634  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:13.092188  369925 ssh_runner.go:195] Run: cat /version.json
	I0916 11:56:13.092231  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:13.092286  369925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:56:13.092341  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:13.111088  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.111589  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.201269  369925 ssh_runner.go:195] Run: systemctl --version
	I0916 11:56:13.281779  369925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:56:13.422613  369925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:56:13.427455  369925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:56:13.446790  369925 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:56:13.446866  369925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:56:13.476675  369925 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:56:13.476703  369925 start.go:495] detecting cgroup driver to use...
	I0916 11:56:13.476733  369925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:56:13.476781  369925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:56:13.491098  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:56:13.501847  369925 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:56:13.501904  369925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:56:13.514875  369925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:56:13.528583  369925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:56:13.608336  369925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:56:13.692657  369925 docker.go:233] disabling docker service ...
	I0916 11:56:13.692728  369925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:56:13.711012  369925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:56:13.722637  369925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:56:13.804004  369925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:56:13.893600  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:56:13.904152  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:56:13.919897  369925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:56:13.919949  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.929206  369925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:56:13.929266  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.938651  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.947671  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.956988  369925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:56:13.965991  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.975358  369925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.990951  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:14.000471  369925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:56:14.008353  369925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:56:14.016281  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:14.094650  369925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:56:14.206633  369925 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:56:14.206706  369925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
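The "Will wait 60s for socket path" step above is a poll-until-exists loop on the CRI socket after restarting crio. A minimal sketch of what such a wait amounts to (only the 60s budget appears in the log; the 500ms interval is an assumption):

package main

import (
	"log"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses, which is all
// the wait on /var/run/crio/crio.sock requires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return os.ErrDeadlineExceeded
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("crio socket is up")
}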
	I0916 11:56:14.210270  369925 start.go:563] Will wait 60s for crictl version
	I0916 11:56:14.210326  369925 ssh_runner.go:195] Run: which crictl
	I0916 11:56:14.214640  369925 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:56:14.248830  369925 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:56:14.248918  369925 ssh_runner.go:195] Run: crio --version
	I0916 11:56:14.286549  369925 ssh_runner.go:195] Run: crio --version
	I0916 11:56:14.323513  369925 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:56:14.324805  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:56:14.342953  369925 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:56:14.346765  369925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:56:14.357487  369925 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:56:14.357602  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:14.357649  369925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:56:14.419150  369925 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:56:14.419171  369925 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:56:14.419215  369925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:56:14.452381  369925 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:56:14.452404  369925 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:56:14.452411  369925 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.31.1 crio true true} ...
	I0916 11:56:14.452494  369925 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-451928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:56:14.452552  369925 ssh_runner.go:195] Run: crio config
	I0916 11:56:14.492446  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:14.492470  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:14.492478  369925 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:56:14.492498  369925 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-451928 NodeName:default-k8s-diff-port-451928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:56:14.492627  369925 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-451928"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:56:14.492684  369925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:56:14.500882  369925 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:56:14.500998  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:56:14.509117  369925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0916 11:56:14.527099  369925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:56:14.543245  369925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
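The kubeadm.go:187 YAML shipped above (2169 bytes, per the scp line) is rendered from the option struct printed at kubeadm.go:181. A toy text/template rendering of just the InitConfiguration head, to illustrate the mechanics; the template fragment below is lifted from the config above and parameterized with the AdvertiseAddress, APIServerPort, and NodeName fields, but it is not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// A fragment of the InitConfiguration shown above, parameterized the way the
// kubeadm.go:181 options suggest.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}{"192.168.103.2", 8444, "default-k8s-diff-port-451928"}
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}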
	I0916 11:56:14.559289  369925 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:56:14.562462  369925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:56:14.572764  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:14.652656  369925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:56:14.665329  369925 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928 for IP: 192.168.103.2
	I0916 11:56:14.665377  369925 certs.go:194] generating shared ca certs ...
	I0916 11:56:14.665401  369925 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.665550  369925 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:56:14.665587  369925 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:56:14.665596  369925 certs.go:256] generating profile certs ...
	I0916 11:56:14.665646  369925 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key
	I0916 11:56:14.665673  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt with IP's: []
	I0916 11:56:14.924148  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt ...
	I0916 11:56:14.924176  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: {Name:mk091e36192745584a10a0223d5da9c4774ead9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.924373  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key ...
	I0916 11:56:14.924390  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key: {Name:mkbdb702fd43b4403c626971aece787eeadc3f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.924500  369925 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28
	I0916 11:56:14.924525  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:56:15.219046  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 ...
	I0916 11:56:15.219072  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28: {Name:mkfe3b390ec90859e5a46e10bdce87c5dc6eb650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.219272  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28 ...
	I0916 11:56:15.219293  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28: {Name:mke646592535caf60542fd88ece7f067c10338a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.219400  369925 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt
	I0916 11:56:15.219505  369925 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key
	I0916 11:56:15.219595  369925 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key
	I0916 11:56:15.219625  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt with IP's: []
	I0916 11:56:15.383658  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt ...
	I0916 11:56:15.383690  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt: {Name:mkd552c3f0141c13b380fd54080a38ef06226dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.383896  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key ...
	I0916 11:56:15.383917  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key: {Name:mk90d5e6b30f7e493c69d8c0bc52df0016cace50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.384122  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:56:15.384172  369925 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:56:15.384188  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:56:15.384223  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:56:15.384256  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:56:15.384287  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:56:15.384343  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:56:15.385028  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:56:15.408492  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:56:15.430724  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:56:15.453586  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:56:15.475502  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 11:56:15.498275  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:56:15.520879  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:56:15.545503  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:56:15.569096  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:56:15.591066  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:56:15.613068  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:56:15.635329  369925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:56:15.652515  369925 ssh_runner.go:195] Run: openssl version
	I0916 11:56:15.657749  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:56:15.666602  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.669904  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.669960  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.676515  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:56:15.685194  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:56:15.693968  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.697218  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.697277  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.703984  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:56:15.712601  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:56:15.721031  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.724791  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.724850  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.731103  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
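
The test/ln/hash runs above implement OpenSSL's standard CA-directory layout: each trusted certificate is exposed under /etc/ssl/certs through a symlink named after its subject hash (the b5213941.0 name in the log is exactly that hash for minikubeCA.pem). A minimal sketch of the same scheme, condensing the two links into one; paths are taken from the log, everything else is stock OpenSSL:

    # Compute the subject hash OpenSSL uses to look the CA up at verify time.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Link it into the CA directory; the ".0" suffix marks the first cert with this hash.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
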
	I0916 11:56:15.739950  369925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:56:15.742974  369925 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:56:15.743021  369925 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:56:15.743080  369925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:56:15.743140  369925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:56:15.777866  369925 cri.go:89] found id: ""
	I0916 11:56:15.777934  369925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:56:15.786860  369925 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:56:15.795173  369925 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:56:15.795238  369925 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:56:15.803379  369925 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:56:15.803402  369925 kubeadm.go:157] found existing configuration files:
	
	I0916 11:56:15.803504  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0916 11:56:15.811862  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:56:15.811917  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:56:15.820159  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0916 11:56:15.828222  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:56:15.828277  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:56:15.836121  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0916 11:56:15.844478  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:56:15.844543  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:56:15.852988  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0916 11:56:15.861492  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:56:15.861564  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:56:15.869244  369925 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:56:15.906985  369925 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:56:15.907060  369925 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:56:15.923558  369925 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:56:15.923620  369925 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:56:15.923661  369925 kubeadm.go:310] OS: Linux
	I0916 11:56:15.923700  369925 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:56:15.923757  369925 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:56:15.923839  369925 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:56:15.923893  369925 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:56:15.923967  369925 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:56:15.924033  369925 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:56:15.924118  369925 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:56:15.924201  369925 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:56:15.924284  369925 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:56:15.975434  369925 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:56:15.975560  369925 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:56:15.975752  369925 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:56:15.981635  369925 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:56:15.984105  369925 out.go:235]   - Generating certificates and keys ...
	I0916 11:56:15.984222  369925 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:56:15.984305  369925 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:56:16.250457  369925 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:56:16.375579  369925 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:56:16.472746  369925 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:56:16.569904  369925 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:56:16.903980  369925 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:56:16.904160  369925 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-451928 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:56:17.174281  369925 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:56:17.174455  369925 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-451928 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:56:17.398938  369925 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:56:17.545679  369925 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:56:17.695489  369925 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:56:17.695611  369925 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:56:17.882081  369925 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:56:17.956171  369925 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:56:18.164752  369925 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:56:18.357126  369925 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:56:18.577865  369925 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:56:18.578456  369925 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:56:18.580900  369925 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:56:18.584022  369925 out.go:235]   - Booting up control plane ...
	I0916 11:56:18.584135  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:56:18.584206  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:56:18.584263  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:56:18.592926  369925 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:56:18.598803  369925 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:56:18.598898  369925 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:56:18.686928  369925 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:56:18.687087  369925 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:56:19.188585  369925 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.688188ms
	I0916 11:56:19.188683  369925 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:56:23.689816  369925 kubeadm.go:310] [api-check] The API server is healthy after 4.501234881s
	I0916 11:56:23.700925  369925 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:56:23.712262  369925 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:56:23.731867  369925 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:56:23.732082  369925 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-451928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:56:23.740449  369925 kubeadm.go:310] [bootstrap-token] Using token: 1cwsrz.9f3rgqsuscyt2usy
	I0916 11:56:23.742197  369925 out.go:235]   - Configuring RBAC rules ...
	I0916 11:56:23.742343  369925 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:56:23.747376  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:56:23.753665  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:56:23.756482  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:56:23.759949  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:56:23.762767  369925 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:56:24.096235  369925 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:56:24.521719  369925 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:56:25.096879  369925 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:56:25.097870  369925 kubeadm.go:310] 
	I0916 11:56:25.097955  369925 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:56:25.097965  369925 kubeadm.go:310] 
	I0916 11:56:25.098061  369925 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:56:25.098070  369925 kubeadm.go:310] 
	I0916 11:56:25.098099  369925 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:56:25.098209  369925 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:56:25.098294  369925 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:56:25.098307  369925 kubeadm.go:310] 
	I0916 11:56:25.098376  369925 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:56:25.098389  369925 kubeadm.go:310] 
	I0916 11:56:25.098467  369925 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:56:25.098478  369925 kubeadm.go:310] 
	I0916 11:56:25.098550  369925 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:56:25.098650  369925 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:56:25.098758  369925 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:56:25.098776  369925 kubeadm.go:310] 
	I0916 11:56:25.098894  369925 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:56:25.099001  369925 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:56:25.099012  369925 kubeadm.go:310] 
	I0916 11:56:25.099131  369925 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1cwsrz.9f3rgqsuscyt2usy \
	I0916 11:56:25.099258  369925 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:56:25.099290  369925 kubeadm.go:310] 	--control-plane 
	I0916 11:56:25.099300  369925 kubeadm.go:310] 
	I0916 11:56:25.099403  369925 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:56:25.099431  369925 kubeadm.go:310] 
	I0916 11:56:25.099631  369925 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1cwsrz.9f3rgqsuscyt2usy \
	I0916 11:56:25.099812  369925 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:56:25.102791  369925 kubeadm.go:310] W0916 11:56:15.903810    1328 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:56:25.103142  369925 kubeadm.go:310] W0916 11:56:15.904614    1328 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:56:25.103423  369925 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:56:25.103527  369925 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
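
The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA's public key. If the value is lost, it can be recomputed from the CA certificate; a sketch assuming minikube's cert path from the scp steps earlier (on a stock kubeadm host the file would be /etc/kubernetes/pki/ca.crt), which should reproduce the f35b6789... value shown above:

    # Hash of the DER-encoded CA public key, in the form kubeadm expects.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
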
	I0916 11:56:25.103557  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:25.103575  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:25.106291  369925 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:56:25.107572  369925 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:56:25.111391  369925 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:56:25.111412  369925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:56:25.128930  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:56:25.326058  369925 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:56:25.326137  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:25.326155  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-451928 minikube.k8s.io/updated_at=2024_09_16T11_56_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=default-k8s-diff-port-451928 minikube.k8s.io/primary=true
	I0916 11:56:25.334061  369925 ops.go:34] apiserver oom_adj: -16
	I0916 11:56:25.415582  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:25.916131  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:26.415875  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:26.916505  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:27.416652  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:27.915672  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.415630  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.916053  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.986004  369925 kubeadm.go:1113] duration metric: took 3.65993128s to wait for elevateKubeSystemPrivileges
	I0916 11:56:28.986059  369925 kubeadm.go:394] duration metric: took 13.24304259s to StartCluster
	I0916 11:56:28.986084  369925 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:28.986181  369925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:56:28.987987  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:28.988247  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:56:28.988275  369925 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:56:28.988246  369925 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:56:28.988353  369925 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-451928"
	I0916 11:56:28.988356  369925 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-451928"
	I0916 11:56:28.988371  369925 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-451928"
	I0916 11:56:28.988376  369925 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-451928"
	I0916 11:56:28.988398  369925 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:56:28.988460  369925 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:28.990224  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:28.990977  369925 out.go:177] * Verifying Kubernetes components...
	I0916 11:56:28.991103  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:28.992313  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:29.018007  369925 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-451928"
	I0916 11:56:29.018056  369925 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:56:29.018152  369925 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:56:29.018499  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:29.019540  369925 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:56:29.019561  369925 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:56:29.019604  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:29.045132  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:29.049324  369925 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:56:29.049370  369925 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:56:29.049429  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:29.068614  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:29.108027  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
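
Unrolled, the sed pipeline above patches the CoreDNS Corefile before replacing the ConfigMap: one -e expression adds a log directive ahead of errors, the other inserts this hosts stanza ahead of the forward plugin, so in-cluster pods can resolve host.minikube.internal to the host gateway (192.168.103.1 here):

        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
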
	I0916 11:56:29.210051  369925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:56:29.214324  369925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:56:29.318299  369925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:56:29.511298  369925 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:56:29.512950  369925 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-451928" to be "Ready" ...
	W0916 11:56:29.613687  369925 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-451928" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0916 11:56:29.613812  369925 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0916 11:56:29.905443  369925 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:56:29.907318  369925 addons.go:510] duration metric: took 919.042056ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:56:31.516192  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:33.516791  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:36.016869  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:38.516958  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:41.016196  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:41.516131  369925 node_ready.go:49] node "default-k8s-diff-port-451928" has status "Ready":"True"
	I0916 11:56:41.516158  369925 node_ready.go:38] duration metric: took 12.003155681s for node "default-k8s-diff-port-451928" to be "Ready" ...
	I0916 11:56:41.516169  369925 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:56:41.522890  369925 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.028991  369925 pod_ready.go:93] pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.029023  369925 pod_ready.go:82] duration metric: took 1.506107319s for pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.029038  369925 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.034688  369925 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.034715  369925 pod_ready.go:82] duration metric: took 5.669153ms for pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.034729  369925 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.039143  369925 pod_ready.go:93] pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.039165  369925 pod_ready.go:82] duration metric: took 4.428544ms for pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.039177  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.043833  369925 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.043858  369925 pod_ready.go:82] duration metric: took 4.669057ms for pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.043869  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.116405  369925 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.116427  369925 pod_ready.go:82] duration metric: took 72.552944ms for pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.116438  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g84zv" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.516582  369925 pod_ready.go:93] pod "kube-proxy-g84zv" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.516608  369925 pod_ready.go:82] duration metric: took 400.162448ms for pod "kube-proxy-g84zv" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.516632  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.916727  369925 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.916750  369925 pod_ready.go:82] duration metric: took 400.110653ms for pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.916762  369925 pod_ready.go:39] duration metric: took 2.400579164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
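
minikube performs these readiness waits by polling the API directly; a roughly equivalent manual check with kubectl, assuming the profile's context name as minikube writes it to the kubeconfig:

    # Block until every kube-system pod reports Ready, or time out after 6m.
    kubectl --context default-k8s-diff-port-451928 -n kube-system \
      wait --for=condition=Ready pod --all --timeout=6m
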
	I0916 11:56:43.916774  369925 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:56:43.916822  369925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:56:43.928042  369925 api_server.go:72] duration metric: took 14.939651343s to wait for apiserver process to appear ...
	I0916 11:56:43.928069  369925 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:56:43.928094  369925 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0916 11:56:43.931965  369925 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0916 11:56:43.932933  369925 api_server.go:141] control plane version: v1.31.1
	I0916 11:56:43.932960  369925 api_server.go:131] duration metric: took 4.882393ms to wait for apiserver health ...
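
The healthz probe is a plain HTTPS GET against the apiserver; the same check can be reproduced with curl, assuming the node IP is reachable from the host and using this run's minikube CA bundle (path taken from the scp steps above) instead of -k:

    # A healthy apiserver answers 200 with the body "ok".
    curl --cacert /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt \
      https://192.168.103.2:8444/healthz
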
	I0916 11:56:43.932970  369925 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:56:44.120099  369925 system_pods.go:59] 9 kube-system pods found
	I0916 11:56:44.120130  369925 system_pods.go:61] "coredns-7c65d6cfc9-c6qt9" [4e0063e4-a603-400c-acb8-094aed6b2941] Running
	I0916 11:56:44.120135  369925 system_pods.go:61] "coredns-7c65d6cfc9-tnm2s" [1ea2318a-d454-406d-bb11-aa3e16dc2950] Running
	I0916 11:56:44.120138  369925 system_pods.go:61] "etcd-default-k8s-diff-port-451928" [1b71472f-f6fc-4a12-bbfc-0ee84a439f81] Running
	I0916 11:56:44.120142  369925 system_pods.go:61] "kindnet-rk7s2" [9b5ccae0-58d8-475c-9c5a-dbb30e19f569] Running
	I0916 11:56:44.120146  369925 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-451928" [f1bb7524-02b3-4ba9-9e22-e4993a8a10b1] Running
	I0916 11:56:44.120149  369925 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-451928" [89cefae9-3120-4eda-beea-28223e0ce7f0] Running
	I0916 11:56:44.120153  369925 system_pods.go:61] "kube-proxy-g84zv" [9e114aae-0ef0-40a3-96c6-f2bc67943f01] Running
	I0916 11:56:44.120156  369925 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-451928" [c53be62e-0975-4134-9769-7df0c6a05afb] Running
	I0916 11:56:44.120161  369925 system_pods.go:61] "storage-provisioner" [3e5fdbb0-ecfb-490a-8314-e624e944b4b5] Running
	I0916 11:56:44.120168  369925 system_pods.go:74] duration metric: took 187.191857ms to wait for pod list to return data ...
	I0916 11:56:44.120175  369925 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:56:44.317310  369925 default_sa.go:45] found service account: "default"
	I0916 11:56:44.317348  369925 default_sa.go:55] duration metric: took 197.165786ms for default service account to be created ...
	I0916 11:56:44.317359  369925 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:56:44.519297  369925 system_pods.go:86] 9 kube-system pods found
	I0916 11:56:44.519330  369925 system_pods.go:89] "coredns-7c65d6cfc9-c6qt9" [4e0063e4-a603-400c-acb8-094aed6b2941] Running
	I0916 11:56:44.519339  369925 system_pods.go:89] "coredns-7c65d6cfc9-tnm2s" [1ea2318a-d454-406d-bb11-aa3e16dc2950] Running
	I0916 11:56:44.519344  369925 system_pods.go:89] "etcd-default-k8s-diff-port-451928" [1b71472f-f6fc-4a12-bbfc-0ee84a439f81] Running
	I0916 11:56:44.519351  369925 system_pods.go:89] "kindnet-rk7s2" [9b5ccae0-58d8-475c-9c5a-dbb30e19f569] Running
	I0916 11:56:44.519356  369925 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-451928" [f1bb7524-02b3-4ba9-9e22-e4993a8a10b1] Running
	I0916 11:56:44.519362  369925 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-451928" [89cefae9-3120-4eda-beea-28223e0ce7f0] Running
	I0916 11:56:44.519369  369925 system_pods.go:89] "kube-proxy-g84zv" [9e114aae-0ef0-40a3-96c6-f2bc67943f01] Running
	I0916 11:56:44.519377  369925 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-451928" [c53be62e-0975-4134-9769-7df0c6a05afb] Running
	I0916 11:56:44.519382  369925 system_pods.go:89] "storage-provisioner" [3e5fdbb0-ecfb-490a-8314-e624e944b4b5] Running
	I0916 11:56:44.519391  369925 system_pods.go:126] duration metric: took 202.026143ms to wait for k8s-apps to be running ...
	I0916 11:56:44.519404  369925 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:56:44.519454  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:56:44.530991  369925 system_svc.go:56] duration metric: took 11.577254ms WaitForService to wait for kubelet
	I0916 11:56:44.531030  369925 kubeadm.go:582] duration metric: took 15.54264235s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:56:44.531057  369925 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:56:44.717684  369925 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:56:44.717712  369925 node_conditions.go:123] node cpu capacity is 8
	I0916 11:56:44.717722  369925 node_conditions.go:105] duration metric: took 186.660851ms to run NodePressure ...
	I0916 11:56:44.717733  369925 start.go:241] waiting for startup goroutines ...
	I0916 11:56:44.717739  369925 start.go:246] waiting for cluster config update ...
	I0916 11:56:44.717749  369925 start.go:255] writing updated cluster config ...
	I0916 11:56:44.718049  369925 ssh_runner.go:195] Run: rm -f paused
	I0916 11:56:44.724825  369925 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-451928" cluster and "default" namespace by default
	E0916 11:56:44.725996  369925 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
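
"exec format error" is ENOEXEC from the kernel: the file at /usr/local/bin/kubectl is not a binary this machine can run, typically a wrong-architecture build or a truncated/empty download. A quick triage sketch; the expected output is an assumption for an amd64 Linux host:

    file /usr/local/bin/kubectl   # expect: ELF 64-bit LSB executable, x86-64
    ls -l /usr/local/bin/kubectl  # a zero-byte file or a saved HTML error page also triggers ENOEXEC
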
	
	
	==> CRI-O <==
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.622558388Z" level=info msg="Got pod network &{Name:coredns-7c65d6cfc9-tnm2s Namespace:kube-system ID:3f6d7320bc95f7e18391efc33e51215320dbbeeeb8a8f38842192646dcd50333 UID:1ea2318a-d454-406d-bb11-aa3e16dc2950 NetNS:/var/run/netns/b61f6ad8-2f49-4d44-9eb3-900131725eac Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.622722250Z" level=info msg="Checking pod kube-system_coredns-7c65d6cfc9-tnm2s for CNI network kindnet (type=ptp)"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.625917041Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7a288d42-63ec-4354-8113-56e9453ace39 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.626732708Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=02705cf5-fcff-4729-8c10-3d8979c9bdde name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.626957027Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=02705cf5-fcff-4729-8c10-3d8979c9bdde name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.629127213Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-c6qt9/coredns" id=4695c526-57e0-44df-b4c6-58a2d310fce8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.629172912Z" level=info msg="Ran pod sandbox 3f6d7320bc95f7e18391efc33e51215320dbbeeeb8a8f38842192646dcd50333 with infra container: kube-system/coredns-7c65d6cfc9-tnm2s/POD" id=f037ffdd-65f4-4af2-aa54-251c0aba1635 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.629229257Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.630215099Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=94e07138-2ca1-4dcb-bf8c-b3a6c191a567 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.630424760Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=94e07138-2ca1-4dcb-bf8c-b3a6c191a567 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.630953004Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=1ec1b014-1ef0-4b6e-8e11-cb9e6e365dda name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631129634Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=1ec1b014-1ef0-4b6e-8e11-cb9e6e365dda name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631387634Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/447f44726f03971435b98afa47474c7dd5b9992dfeb3ade9078289d4125787ad/merged/etc/passwd: no such file or directory"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631429875Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/447f44726f03971435b98afa47474c7dd5b9992dfeb3ade9078289d4125787ad/merged/etc/group: no such file or directory"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631733377Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-tnm2s/coredns" id=e54c9faa-287e-4fe9-9337-4d48efaf06fc name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631821742Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.710156036Z" level=info msg="Created container 08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a: kube-system/storage-provisioner/storage-provisioner" id=ae06edbb-7f1e-4bf1-a892-41bea84b1c62 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.710838661Z" level=info msg="Starting container: 08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a" id=3537d5b4-7fd1-4045-a722-bc2ada20016a name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.717793917Z" level=info msg="Started container" PID=2272 containerID=08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a description=kube-system/storage-provisioner/storage-provisioner id=3537d5b4-7fd1-4045-a722-bc2ada20016a name=/runtime.v1.RuntimeService/StartContainer sandboxID=76d2ac2b9d946657e773a66ffdfe9830c488cb4be1aa1b58bb27289cb5f0ad15
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.728172616Z" level=info msg="Created container 045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3: kube-system/coredns-7c65d6cfc9-c6qt9/coredns" id=4695c526-57e0-44df-b4c6-58a2d310fce8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.729064907Z" level=info msg="Starting container: 045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3" id=fae55f1f-4e38-402c-8a98-c6626a612f31 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.735728481Z" level=info msg="Started container" PID=2285 containerID=045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3 description=kube-system/coredns-7c65d6cfc9-c6qt9/coredns id=fae55f1f-4e38-402c-8a98-c6626a612f31 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c131646e0d50d73f2a3004247eaab734b98ca5366f46bc44b47bc034c0a2f35b
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.741598902Z" level=info msg="Created container 688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba: kube-system/coredns-7c65d6cfc9-tnm2s/coredns" id=e54c9faa-287e-4fe9-9337-4d48efaf06fc name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.793742567Z" level=info msg="Starting container: 688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba" id=2bf7c9d7-370d-4dff-b233-6c75543286c2 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.801265340Z" level=info msg="Started container" PID=2311 containerID=688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba description=kube-system/coredns-7c65d6cfc9-tnm2s/coredns id=2bf7c9d7-370d-4dff-b233-6c75543286c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f6d7320bc95f7e18391efc33e51215320dbbeeeb8a8f38842192646dcd50333
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	688086cd61e60       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   5 seconds ago       Running             coredns                   0                   3f6d7320bc95f       coredns-7c65d6cfc9-tnm2s
	045367a0c66bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   5 seconds ago       Running             coredns                   0                   c131646e0d50d       coredns-7c65d6cfc9-c6qt9
	08fa360282467       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 seconds ago       Running             storage-provisioner       0                   76d2ac2b9d946       storage-provisioner
	9d3593f5e16ca       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   17 seconds ago      Running             kindnet-cni               0                   2cfd58dc984bd       kindnet-rk7s2
	4ec4a11e3a24d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 seconds ago      Running             kube-proxy                0                   abb22584f1ba3       kube-proxy-g84zv
	7928e02dcad53       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   28 seconds ago      Running             kube-apiserver            0                   2567d54afee95       kube-apiserver-default-k8s-diff-port-451928
	8e9e71592f12e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   28 seconds ago      Running             kube-scheduler            0                   205f26e38ad59       kube-scheduler-default-k8s-diff-port-451928
	478b30866eae0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   28 seconds ago      Running             kube-controller-manager   0                   5c43721ed6e3b       kube-controller-manager-default-k8s-diff-port-451928
	245f21f94877c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   28 seconds ago      Running             etcd                      0                   9a7d0f2b97773       etcd-default-k8s-diff-port-451928
	
	
	==> coredns [045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47255 - 13176 "HINFO IN 2991928513979281550.1716499013040556013. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008117726s
	
	
	==> coredns [688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56972 - 15234 "HINFO IN 8713296587055300928.4817992167101797270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010563621s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-451928
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-451928
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=default-k8s-diff-port-451928
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_56_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-451928
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:56:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-451928
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 778d5e12087f47e2ae021c8dc368f974
	  System UUID:                96d27eb1-3e28-4d66-8a00-17bd26589e25
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-c6qt9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18s
	  kube-system                 coredns-7c65d6cfc9-tnm2s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18s
	  kube-system                 etcd-default-k8s-diff-port-451928                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         23s
	  kube-system                 kindnet-rk7s2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-451928             250m (3%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-451928    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-g84zv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-451928             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17s   kube-proxy       
	  Normal   Starting                 23s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 23s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  23s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19s   node-controller  Node default-k8s-diff-port-451928 event: Registered Node default-k8s-diff-port-451928 in Controller
	  Normal   NodeReady                6s    kubelet          Node default-k8s-diff-port-451928 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +1.027886] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000007] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +2.015855] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000006] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +4.223671] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000005] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000002] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000002] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +8.191398] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000006] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	
	
	==> etcd [245f21f94877cabfe24fc492e462f5cf8b616b6966f8967725e5ff7548bdc657] <==
	{"level":"info","ts":"2024-09-16T11:56:19.630617Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:56:19.630833Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:56:19.630859Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:56:19.631376Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:56:19.631433Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:56:19.916096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.917236Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.917944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:56:19.917968Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:56:19.918201Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:56:19.918225Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:56:19.918235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.917948Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-451928 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:56:19.918329Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.918361Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.919099Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:56:19.920323Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:56:19.921495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T11:56:19.921594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:56:47 up  1:39,  0 users,  load average: 2.30, 1.34, 1.01
	Linux default-k8s-diff-port-451928 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [9d3593f5e16ca1e3018cf675c2777bfccccb3325b4a618a4fc6f6dab6efde4ab] <==
	I0916 11:56:30.296501       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:56:30.296764       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0916 11:56:30.296916       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:56:30.296930       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:56:30.296951       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:56:30.694194       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:56:30.694222       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:56:30.694230       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:56:30.894328       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:56:30.894449       1 metrics.go:61] Registering metrics
	I0916 11:56:30.894522       1 controller.go:374] Syncing nftables rules
	I0916 11:56:40.698104       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:56:40.698169       1 main.go:299] handling current node
	
	
	==> kube-apiserver [7928e02dcad530c19c0b6ec7e01fbb3385f0324d1232f9672d14062a1addcfd3] <==
	I0916 11:56:21.806408       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:56:21.806414       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:56:21.806420       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:56:21.817011       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:56:21.823263       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:56:21.823291       1 policy_source.go:224] refreshing policies
	E0916 11:56:21.859718       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E0916 11:56:21.878300       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:56:21.907495       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:56:22.081916       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:56:22.710034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:56:22.713839       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:56:22.713860       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:56:23.191226       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:56:23.228428       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:56:23.312905       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:56:23.319069       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:56:23.320261       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:56:23.324351       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:56:23.738612       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:56:24.505110       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:56:24.520375       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:56:24.528400       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:56:29.401242       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:56:29.513882       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [478b30866eae01a91f51089d900b6295124848c3e35c0f765a4cbeb3bf0485fe] <==
	I0916 11:56:28.689382       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 11:56:28.689453       1 shared_informer.go:320] Caches are synced for deployment
	I0916 11:56:28.694790       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:56:28.699949       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:56:28.739103       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 11:56:29.110053       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:56:29.193577       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:56:29.193627       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:56:29.310514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:29.703476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="184.541777ms"
	I0916 11:56:29.710974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.439513ms"
	I0916 11:56:29.711090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.796µs"
	I0916 11:56:29.711219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.08µs"
	I0916 11:56:41.247397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:41.270424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:41.277033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.474µs"
	I0916 11:56:41.278381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.272µs"
	I0916 11:56:41.294102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.258µs"
	I0916 11:56:41.303925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.725µs"
	I0916 11:56:42.534787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.552µs"
	I0916 11:56:42.554508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.70186ms"
	I0916 11:56:42.554631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.387µs"
	I0916 11:56:42.572298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.233175ms"
	I0916 11:56:42.572396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.477µs"
	I0916 11:56:43.689750       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4ec4a11e3a24d5e1ce02dfd1183ec90b7b3781239d805a4d6ccf113375e15922] <==
	I0916 11:56:29.947995       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:56:30.046104       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 11:56:30.046167       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:56:30.064920       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:56:30.064979       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:56:30.067043       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:56:30.067493       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:56:30.067527       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:56:30.068845       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:56:30.069397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:56:30.069400       1 config.go:199] "Starting service config controller"
	I0916 11:56:30.069422       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:56:30.069563       1 config.go:328] "Starting node config controller"
	I0916 11:56:30.069629       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:56:30.169579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:56:30.169580       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:56:30.169853       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e9e71592f12e81a163e98e2f07e72e1f169a103a6aed393c95dee0e94c5cf50] <==
	W0916 11:56:21.814115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:56:21.814368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:21.812491       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:56:21.814396       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:56:21.814469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:56:21.814554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.687572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:56:22.687618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.748312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:56:22.748354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.798907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:56:22.798950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.852016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:56:22.852069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.912767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:56:22.912812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.917238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:56:22.917276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.971675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:56:22.971722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:23.005307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:56:23.005394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:23.091228       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:56:23.091278       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:56:25.811059       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.595344    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e114aae-0ef0-40a3-96c6-f2bc67943f01-kube-proxy\") pod \"kube-proxy-g84zv\" (UID: \"9e114aae-0ef0-40a3-96c6-f2bc67943f01\") " pod="kube-system/kube-proxy-g84zv"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.595415    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j8f\" (UniqueName: \"kubernetes.io/projected/9e114aae-0ef0-40a3-96c6-f2bc67943f01-kube-api-access-t9j8f\") pod \"kube-proxy-g84zv\" (UID: \"9e114aae-0ef0-40a3-96c6-f2bc67943f01\") " pod="kube-system/kube-proxy-g84zv"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.595451    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e114aae-0ef0-40a3-96c6-f2bc67943f01-lib-modules\") pod \"kube-proxy-g84zv\" (UID: \"9e114aae-0ef0-40a3-96c6-f2bc67943f01\") " pod="kube-system/kube-proxy-g84zv"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.595476    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e114aae-0ef0-40a3-96c6-f2bc67943f01-xtables-lock\") pod \"kube-proxy-g84zv\" (UID: \"9e114aae-0ef0-40a3-96c6-f2bc67943f01\") " pod="kube-system/kube-proxy-g84zv"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.695876    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-cni-cfg\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.695943    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-lib-modules\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.696005    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-xtables-lock\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.696043    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzczw\" (UniqueName: \"kubernetes.io/projected/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-kube-api-access-tzczw\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.705478    1676 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:56:30 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:30.510135    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rk7s2" podStartSLOduration=1.510110896 podStartE2EDuration="1.510110896s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:30.509932137 +0000 UTC m=+6.216515412" watchObservedRunningTime="2024-09-16 11:56:30.510110896 +0000 UTC m=+6.216694175"
	Sep 16 11:56:30 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:30.519577    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g84zv" podStartSLOduration=1.519552813 podStartE2EDuration="1.519552813s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:30.519473363 +0000 UTC m=+6.226056639" watchObservedRunningTime="2024-09-16 11:56:30.519552813 +0000 UTC m=+6.226136092"
	Sep 16 11:56:34 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:34.430244    1676 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487794430057224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:34 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:34.430286    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487794430057224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.240163    1676 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375093    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrgm\" (UniqueName: \"kubernetes.io/projected/4e0063e4-a603-400c-acb8-094aed6b2941-kube-api-access-rfrgm\") pod \"coredns-7c65d6cfc9-c6qt9\" (UID: \"4e0063e4-a603-400c-acb8-094aed6b2941\") " pod="kube-system/coredns-7c65d6cfc9-c6qt9"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375143    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw5mk\" (UniqueName: \"kubernetes.io/projected/3e5fdbb0-ecfb-490a-8314-e624e944b4b5-kube-api-access-cw5mk\") pod \"storage-provisioner\" (UID: \"3e5fdbb0-ecfb-490a-8314-e624e944b4b5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375194    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e0063e4-a603-400c-acb8-094aed6b2941-config-volume\") pod \"coredns-7c65d6cfc9-c6qt9\" (UID: \"4e0063e4-a603-400c-acb8-094aed6b2941\") " pod="kube-system/coredns-7c65d6cfc9-c6qt9"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375237    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3e5fdbb0-ecfb-490a-8314-e624e944b4b5-tmp\") pod \"storage-provisioner\" (UID: \"3e5fdbb0-ecfb-490a-8314-e624e944b4b5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375269    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ea2318a-d454-406d-bb11-aa3e16dc2950-config-volume\") pod \"coredns-7c65d6cfc9-tnm2s\" (UID: \"1ea2318a-d454-406d-bb11-aa3e16dc2950\") " pod="kube-system/coredns-7c65d6cfc9-tnm2s"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375285    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzpfm\" (UniqueName: \"kubernetes.io/projected/1ea2318a-d454-406d-bb11-aa3e16dc2950-kube-api-access-qzpfm\") pod \"coredns-7c65d6cfc9-tnm2s\" (UID: \"1ea2318a-d454-406d-bb11-aa3e16dc2950\") " pod="kube-system/coredns-7c65d6cfc9-tnm2s"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.534781    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tnm2s" podStartSLOduration=13.534759159 podStartE2EDuration="13.534759159s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.534367604 +0000 UTC m=+18.240950903" watchObservedRunningTime="2024-09-16 11:56:42.534759159 +0000 UTC m=+18.241342440"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.574588    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.574561361 podStartE2EDuration="13.574561361s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.574522367 +0000 UTC m=+18.281105644" watchObservedRunningTime="2024-09-16 11:56:42.574561361 +0000 UTC m=+18.281144637"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.575038    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c6qt9" podStartSLOduration=13.575025761 podStartE2EDuration="13.575025761s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.561105849 +0000 UTC m=+18.267689125" watchObservedRunningTime="2024-09-16 11:56:42.575025761 +0000 UTC m=+18.281609035"
	Sep 16 11:56:44 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:44.431407    1676 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487804431221811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:44 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:44.431440    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487804431221811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a] <==
	I0916 11:56:41.733859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:56:41.743237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:56:41.743282       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:56:41.802642       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:56:41.802712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18fcca8c-b8bd-4cf6-b5f8-70b48585a383", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097 became leader
	I0916 11:56:41.802842       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097!
	I0916 11:56:41.903549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (530.551µs)
helpers_test.go:263: kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (3.66s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-451928 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-451928 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-451928 describe deploy/metrics-server -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (476.337µs)
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-451928 describe deploy/metrics-server -n kube-system": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-451928
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-451928:

-- stdout --
	[
	    {
	        "Id": "5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae",
	        "Created": "2024-09-16T11:56:10.793026862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 370642,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:56:10.911717057Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/hosts",
	        "LogPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae-json.log",
	        "Name": "/default-k8s-diff-port-451928",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-451928:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-451928",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-451928",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-451928/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-451928",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-451928",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-451928",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c295616087e44bffb82a8e4e82399f08c9ad2a364df3b7343d36ba13396023a6",
	            "SandboxKey": "/var/run/docker/netns/c295616087e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-451928": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "22c51b08b0ca2daf580627f39cd71ae241a476b62a744a7a3bfd63c1aaadfdfe",
	                    "EndpointID": "576e0db3957872bf299445aa83a23070656403cdbf34945b607d06891920fd68",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-451928",
	                        "5e4edb1ce4fb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-451928 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-451928 logs -n 25: (1.146500032s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p enable-default-cni-838467                           | enable-default-cni-838467    | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-838467 pgrep                         | custom-flannel-838467        | jenkins | v1.34.0 | 16 Sep 24 11:41 UTC | 16 Sep 24 11:41 UTC |
	|         | -a kubelet                                             |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-406673 image                           | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-946599 | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | disable-driver-mounts-946599                           |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-179932             | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-179932                  | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-179932 image list                           | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:55 UTC | 16 Sep 24 11:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-451928  | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:56:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:56:05.303544  369925 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:56:05.303695  369925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:56:05.303707  369925 out.go:358] Setting ErrFile to fd 2...
	I0916 11:56:05.303713  369925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:56:05.304017  369925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:56:05.304835  369925 out.go:352] Setting JSON to false
	I0916 11:56:05.306135  369925 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5905,"bootTime":1726481860,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:56:05.306265  369925 start.go:139] virtualization: kvm guest
	I0916 11:56:05.308684  369925 out.go:177] * [default-k8s-diff-port-451928] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:56:05.310432  369925 notify.go:220] Checking for updates...
	I0916 11:56:05.310468  369925 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:56:05.311947  369925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:56:05.313397  369925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:56:05.315161  369925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:56:05.316694  369925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:56:05.318120  369925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:56:05.319958  369925 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320054  369925 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320136  369925 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:05.320218  369925 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:56:05.343305  369925 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:56:05.343431  369925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:56:05.398162  369925 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:56:05.386767708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:56:05.398269  369925 docker.go:318] overlay module found
	I0916 11:56:05.401236  369925 out.go:177] * Using the docker driver based on user configuration
	I0916 11:56:05.402778  369925 start.go:297] selected driver: docker
	I0916 11:56:05.402792  369925 start.go:901] validating driver "docker" against <nil>
	I0916 11:56:05.402803  369925 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:56:05.403619  369925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:56:05.458917  369925 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:56:05.449556012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
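The two "docker system info --format {{json .}}" runs above (one before selecting the driver, one while validating it) are how minikube gathers host facts such as the cgroup driver, CPU count and memory. A rough sketch of that round trip, assuming only that a docker CLI is on PATH; the struct keeps just a few of the fields visible in the dumps and is not minikube's actual type:

    // host_info.go - rough sketch of the cli_runner round trip above.
    // Assumes a local docker CLI; run it on a docker host.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        // Only a few of the fields visible in the info dump above.
        var info struct {
            Driver       string
            CgroupDriver string
            NCPU         int
            MemTotal     int64
        }
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("driver=%s cgroup=%s cpus=%d mem=%d\n",
            info.Driver, info.CgroupDriver, info.NCPU, info.MemTotal)
    }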
	I0916 11:56:05.459101  369925 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:56:05.459345  369925 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:56:05.460940  369925 out.go:177] * Using Docker driver with root privileges
	I0916 11:56:05.462262  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:05.462314  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:05.462326  369925 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:56:05.462389  369925 start.go:340] cluster config:
	{Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:56:05.463951  369925 out.go:177] * Starting "default-k8s-diff-port-451928" primary control-plane node in "default-k8s-diff-port-451928" cluster
	I0916 11:56:05.465195  369925 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:56:05.466528  369925 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:56:05.467567  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:05.467607  369925 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:56:05.467620  369925 cache.go:56] Caching tarball of preloaded images
	I0916 11:56:05.467678  369925 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:56:05.467704  369925 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:56:05.467737  369925 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:56:05.467838  369925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json ...
	I0916 11:56:05.467865  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json: {Name:mk3f0192a4b7f3d3763c1a6bd15f21266a5e389c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:56:05.488729  369925 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:56:05.488751  369925 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:56:05.488835  369925 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:56:05.488858  369925 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:56:05.488863  369925 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:56:05.488873  369925 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:56:05.488884  369925 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:56:05.554343  369925 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:56:05.554398  369925 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:56:05.554444  369925 start.go:360] acquireMachinesLock for default-k8s-diff-port-451928: {Name:mkd4d5ce5590d094d470576746b410c1fbb05d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:56:05.554565  369925 start.go:364] duration metric: took 95.582µs to acquireMachinesLock for "default-k8s-diff-port-451928"
	I0916 11:56:05.554594  369925 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:56:05.554695  369925 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:56:05.556420  369925 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:56:05.556691  369925 start.go:159] libmachine.API.Create for "default-k8s-diff-port-451928" (driver="docker")
	I0916 11:56:05.556715  369925 client.go:168] LocalClient.Create starting
	I0916 11:56:05.556786  369925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 11:56:05.556820  369925 main.go:141] libmachine: Decoding PEM data...
	I0916 11:56:05.556841  369925 main.go:141] libmachine: Parsing certificate...
	I0916 11:56:05.556906  369925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 11:56:05.556940  369925 main.go:141] libmachine: Decoding PEM data...
	I0916 11:56:05.556954  369925 main.go:141] libmachine: Parsing certificate...
	I0916 11:56:05.557289  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:56:05.576970  369925 cli_runner.go:211] docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:56:05.577029  369925 network_create.go:284] running [docker network inspect default-k8s-diff-port-451928] to gather additional debugging logs...
	I0916 11:56:05.577045  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928
	W0916 11:56:05.594489  369925 cli_runner.go:211] docker network inspect default-k8s-diff-port-451928 returned with exit code 1
	I0916 11:56:05.594540  369925 network_create.go:287] error running [docker network inspect default-k8s-diff-port-451928]: docker network inspect default-k8s-diff-port-451928: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-451928 not found
	I0916 11:56:05.594557  369925 network_create.go:289] output of [docker network inspect default-k8s-diff-port-451928]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-451928 not found
	
	** /stderr **
	I0916 11:56:05.594675  369925 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:56:05.613180  369925 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 11:56:05.614533  369925 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 11:56:05.615818  369925 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 11:56:05.616767  369925 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 11:56:05.617852  369925 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 11:56:05.618825  369925 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 11:56:05.620128  369925 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023833e0}
	I0916 11:56:05.620159  369925 network_create.go:124] attempt to create docker network default-k8s-diff-port-451928 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:56:05.620217  369925 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 default-k8s-diff-port-451928
	I0916 11:56:05.688360  369925 network_create.go:108] docker network default-k8s-diff-port-451928 192.168.103.0/24 created
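The network.go lines above walk candidate private /24s with the third octet advancing by 9 (49, 58, 67, 76, 85, 94) and settle on the first one not already claimed by an existing bridge, 192.168.103.0/24. A toy reproduction of that scan, assuming only the step-of-9 pattern evident in this log; the taken set is hard-coded from the lines above, not discovered:

    // subnet_scan.go - illustrative sketch, not minikube's network.go.
    package main

    import "fmt"

    func main() {
        // Subnets already claimed by bridges in the log above.
        taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
        for octet := 49; octet < 255; octet += 9 {
            if taken[octet] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
            break
        }
    }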
	I0916 11:56:05.688413  369925 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-451928" container
	I0916 11:56:05.688485  369925 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:56:05.707399  369925 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-451928 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:56:05.726926  369925 oci.go:103] Successfully created a docker volume default-k8s-diff-port-451928
	I0916 11:56:05.727024  369925 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-451928-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --entrypoint /usr/bin/test -v default-k8s-diff-port-451928:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:56:06.241480  369925 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-451928
	I0916 11:56:06.241517  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:06.241541  369925 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:56:06.241592  369925 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-451928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:56:10.727699  369925 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-451928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.486050673s)
	I0916 11:56:10.727728  369925 kic.go:203] duration metric: took 4.486185106s to extract preloaded images to volume ...
	W0916 11:56:10.727849  369925 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:56:10.727935  369925 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:56:10.777179  369925 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-451928 --name default-k8s-diff-port-451928 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-451928 --network default-k8s-diff-port-451928 --ip 192.168.103.2 --volume default-k8s-diff-port-451928:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:56:11.076240  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Running}}
	I0916 11:56:11.096679  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.115724  369925 cli_runner.go:164] Run: docker exec default-k8s-diff-port-451928 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:56:11.159179  369925 oci.go:144] the created container "default-k8s-diff-port-451928" has a running status.
	I0916 11:56:11.159223  369925 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa...
	I0916 11:56:11.413161  369925 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:56:11.438230  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.465506  369925 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:56:11.465530  369925 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-451928 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:56:11.515063  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:11.537179  369925 machine.go:93] provisionDockerMachine start ...
	I0916 11:56:11.537281  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.556335  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.556616  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.556640  369925 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:56:11.768876  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-451928
	
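The "docker container inspect -f" invocations in this provisioning phase use a Go template to pull the published host port for 22/tcp out of NetworkSettings, which is where the 127.0.0.1:33108 endpoint in the "Using SSH client type: native" lines comes from. A self-contained sketch of that template evaluated against stand-in data (the types here are illustrative, not docker's):

    // hostport_template.go - evaluates the same template the log's
    // "docker container inspect -f" commands use, against fake data.
    package main

    import (
        "os"
        "text/template"
    )

    type binding struct{ HostIp, HostPort string }

    func main() {
        ports := map[string][]binding{"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33108"}}}
        data := map[string]any{"NetworkSettings": map[string]any{"Ports": ports}}
        tmpl := template.Must(template.New("hostport").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        if err := tmpl.Execute(os.Stdout, data); err != nil { // prints 33108
            panic(err)
        }
    }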
	I0916 11:56:11.768902  369925 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-451928"
	I0916 11:56:11.768966  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.788986  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.789249  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.789266  369925 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-451928 && echo "default-k8s-diff-port-451928" | sudo tee /etc/hostname
	I0916 11:56:11.936460  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-451928
	
	I0916 11:56:11.936559  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:11.954102  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:11.954288  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:11.954311  369925 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-451928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-451928/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-451928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:56:12.089644  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:56:12.089677  369925 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:56:12.089715  369925 ubuntu.go:177] setting up certificates
	I0916 11:56:12.089731  369925 provision.go:84] configureAuth start
	I0916 11:56:12.089783  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:12.106669  369925 provision.go:143] copyHostCerts
	I0916 11:56:12.106734  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:56:12.106742  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:56:12.106811  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:56:12.106897  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:56:12.106906  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:56:12.106929  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:56:12.106983  369925 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:56:12.106989  369925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:56:12.107010  369925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:56:12.107105  369925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-451928 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-451928 localhost minikube]
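provision.go:117 above mints a server certificate whose SANs cover every name the machine may be reached by: 127.0.0.1, the container IP 192.168.103.2, the profile name, localhost and minikube. An illustrative Go sketch of such a certificate; it is self-signed for brevity where the real flow signs with ca-key.pem, and the SAN values and lifetime are copied from this log:

    // server_cert.go - illustrative only; minikube signs with its CA.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-451928"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            DNSNames:     []string{"default-k8s-diff-port-451928", "localhost", "minikube"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed: the template doubles as its own parent. The real
        // provision step would pass the CA certificate and CA key here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }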
	I0916 11:56:12.356779  369925 provision.go:177] copyRemoteCerts
	I0916 11:56:12.356846  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:56:12.356882  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.373979  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:12.474469  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:56:12.498244  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 11:56:12.520551  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:56:12.543988  369925 provision.go:87] duration metric: took 454.24102ms to configureAuth
	I0916 11:56:12.544015  369925 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:56:12.544171  369925 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:12.544262  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.562970  369925 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:12.563218  369925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:56:12.563243  369925 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:56:12.788862  369925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:56:12.788899  369925 machine.go:96] duration metric: took 1.251694448s to provisionDockerMachine
	I0916 11:56:12.788917  369925 client.go:171] duration metric: took 7.23219201s to LocalClient.Create
	I0916 11:56:12.788941  369925 start.go:167] duration metric: took 7.232248271s to libmachine.API.Create "default-k8s-diff-port-451928"
	I0916 11:56:12.788953  369925 start.go:293] postStartSetup for "default-k8s-diff-port-451928" (driver="docker")
	I0916 11:56:12.788969  369925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:56:12.789043  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:56:12.789093  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.808336  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:12.906746  369925 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:56:12.909982  369925 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:56:12.910020  369925 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:56:12.910032  369925 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:56:12.910040  369925 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:56:12.910054  369925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:56:12.910120  369925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:56:12.910210  369925 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:56:12.910334  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:56:12.919277  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:56:12.943861  369925 start.go:296] duration metric: took 154.890441ms for postStartSetup
	I0916 11:56:12.944234  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:12.962378  369925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json ...
	I0916 11:56:12.962654  369925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:56:12.962705  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:12.979711  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.070255  369925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:56:13.074523  369925 start.go:128] duration metric: took 7.519810319s to createHost
	I0916 11:56:13.074564  369925 start.go:83] releasing machines lock for "default-k8s-diff-port-451928", held for 7.519971551s
	I0916 11:56:13.074634  369925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:13.092188  369925 ssh_runner.go:195] Run: cat /version.json
	I0916 11:56:13.092231  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:13.092286  369925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:56:13.092341  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:13.111088  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.111589  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:13.201269  369925 ssh_runner.go:195] Run: systemctl --version
	I0916 11:56:13.281779  369925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:56:13.422613  369925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:56:13.427455  369925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:56:13.446790  369925 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:56:13.446866  369925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:56:13.476675  369925 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:56:13.476703  369925 start.go:495] detecting cgroup driver to use...
	I0916 11:56:13.476733  369925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:56:13.476781  369925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:56:13.491098  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:56:13.501847  369925 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:56:13.501904  369925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:56:13.514875  369925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:56:13.528583  369925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:56:13.608336  369925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:56:13.692657  369925 docker.go:233] disabling docker service ...
	I0916 11:56:13.692728  369925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:56:13.711012  369925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:56:13.722637  369925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:56:13.804004  369925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:56:13.893600  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:56:13.904152  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:56:13.919897  369925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:56:13.919949  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.929206  369925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:56:13.929266  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.938651  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.947671  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.956988  369925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:56:13.965991  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.975358  369925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:13.990951  369925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:56:14.000471  369925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:56:14.008353  369925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:56:14.016281  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:14.094650  369925 ssh_runner.go:195] Run: sudo systemctl restart crio
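
Reconstructed from the sed/grep commands above (not read back from the host), the net effect on /etc/crio/crio.conf.d/02-crio.conf is: pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" appended under default_sysctls. A hedged spot-check after the restart:

    # show the effective values CRI-O loaded (crio config is also run by minikube below)
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
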
	I0916 11:56:14.206633  369925 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:56:14.206706  369925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:56:14.210270  369925 start.go:563] Will wait 60s for crictl version
	I0916 11:56:14.210326  369925 ssh_runner.go:195] Run: which crictl
	I0916 11:56:14.214640  369925 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:56:14.248830  369925 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 11:56:14.248918  369925 ssh_runner.go:195] Run: crio --version
	I0916 11:56:14.286549  369925 ssh_runner.go:195] Run: crio --version
	I0916 11:56:14.323513  369925 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:56:14.324805  369925 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:56:14.342953  369925 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:56:14.346765  369925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:56:14.357487  369925 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:56:14.357602  369925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:14.357649  369925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:56:14.419150  369925 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:56:14.419171  369925 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:56:14.419215  369925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:56:14.452381  369925 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:56:14.452404  369925 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:56:14.452411  369925 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.31.1 crio true true} ...
	I0916 11:56:14.452494  369925 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-451928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:56:14.452552  369925 ssh_runner.go:195] Run: crio config
	I0916 11:56:14.492446  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:14.492470  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:14.492478  369925 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:56:14.492498  369925 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-451928 NodeName:default-k8s-diff-port-451928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:56:14.492627  369925 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-451928"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:56:14.492684  369925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:56:14.500882  369925 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:56:14.500998  369925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:56:14.509117  369925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0916 11:56:14.527099  369925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:56:14.543245  369925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
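
The kubeadm config just written to /var/tmp/minikube/kubeadm.yaml.new still uses the deprecated kubeadm.k8s.io/v1beta3 API, which is why kubeadm prints deprecation warnings during init (see 11:56:25 below). A sketch of how the file could be checked or migrated by hand with the same pinned kubeadm; the /tmp output path is hypothetical and nothing in this run executes these:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml.new --new-config /tmp/kubeadm-migrated.yaml
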
	I0916 11:56:14.559289  369925 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:56:14.562462  369925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
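
Both host-record rewrites (host.minikube.internal at 11:56:14.346 above, control-plane.minikube.internal here) follow the same pattern: drop any stale entry from /etc/hosts, append the fresh one, then sudo-copy the temp file back. A one-line sketch to confirm the result:

    getent hosts control-plane.minikube.internal   # expect: 192.168.103.2
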
	I0916 11:56:14.572764  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:14.652656  369925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:56:14.665329  369925 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928 for IP: 192.168.103.2
	I0916 11:56:14.665377  369925 certs.go:194] generating shared ca certs ...
	I0916 11:56:14.665401  369925 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.665550  369925 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:56:14.665587  369925 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:56:14.665596  369925 certs.go:256] generating profile certs ...
	I0916 11:56:14.665646  369925 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key
	I0916 11:56:14.665673  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt with IP's: []
	I0916 11:56:14.924148  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt ...
	I0916 11:56:14.924176  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: {Name:mk091e36192745584a10a0223d5da9c4774ead9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.924373  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key ...
	I0916 11:56:14.924390  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key: {Name:mkbdb702fd43b4403c626971aece787eeadc3f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:14.924500  369925 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28
	I0916 11:56:14.924525  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:56:15.219046  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 ...
	I0916 11:56:15.219072  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28: {Name:mkfe3b390ec90859e5a46e10bdce87c5dc6eb650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.219272  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28 ...
	I0916 11:56:15.219293  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28: {Name:mke646592535caf60542fd88ece7f067c10338a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.219400  369925 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt.b47f4f28 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt
	I0916 11:56:15.219505  369925 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28 -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key
	I0916 11:56:15.219595  369925 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key
	I0916 11:56:15.219625  369925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt with IP's: []
	I0916 11:56:15.383658  369925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt ...
	I0916 11:56:15.383690  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt: {Name:mkd552c3f0141c13b380fd54080a38ef06226dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.383896  369925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key ...
	I0916 11:56:15.383917  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key: {Name:mk90d5e6b30f7e493c69d8c0bc52df0016cace50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:15.384122  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:56:15.384172  369925 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:56:15.384188  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:56:15.384223  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:56:15.384256  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:56:15.384287  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:56:15.384343  369925 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:56:15.385028  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:56:15.408492  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:56:15.430724  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:56:15.453586  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:56:15.475502  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 11:56:15.498275  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:56:15.520879  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:56:15.545503  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:56:15.569096  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:56:15.591066  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:56:15.613068  369925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:56:15.635329  369925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
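
Every profile cert copied above chains to the shared minikubeCA. A minimal sketch for spot-checking the apiserver cert on the node with openssl (paths exactly as in the scp commands above; not part of the recorded run):

    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
    sudo openssl x509 -noout -subject -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt
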
	I0916 11:56:15.652515  369925 ssh_runner.go:195] Run: openssl version
	I0916 11:56:15.657749  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:56:15.666602  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.669904  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.669960  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:56:15.676515  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:56:15.685194  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:56:15.693968  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.697218  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.697277  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:56:15.703984  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:56:15.712601  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:56:15.721031  369925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.724791  369925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.724850  369925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:56:15.731103  369925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:56:15.739950  369925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:56:15.742974  369925 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:56:15.743021  369925 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:56:15.743080  369925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:56:15.743140  369925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:56:15.777866  369925 cri.go:89] found id: ""
	I0916 11:56:15.777934  369925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:56:15.786860  369925 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:56:15.795173  369925 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:56:15.795238  369925 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:56:15.803379  369925 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:56:15.803402  369925 kubeadm.go:157] found existing configuration files:
	
	I0916 11:56:15.803504  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0916 11:56:15.811862  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:56:15.811917  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:56:15.820159  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0916 11:56:15.828222  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:56:15.828277  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:56:15.836121  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0916 11:56:15.844478  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:56:15.844543  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:56:15.852988  369925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0916 11:56:15.861492  369925 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:56:15.861564  369925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:56:15.869244  369925 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:56:15.906985  369925 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:56:15.907060  369925 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:56:15.923558  369925 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:56:15.923620  369925 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:56:15.923661  369925 kubeadm.go:310] OS: Linux
	I0916 11:56:15.923700  369925 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:56:15.923757  369925 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:56:15.923839  369925 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:56:15.923893  369925 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:56:15.923967  369925 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:56:15.924033  369925 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:56:15.924118  369925 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:56:15.924201  369925 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:56:15.924284  369925 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:56:15.975434  369925 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:56:15.975560  369925 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:56:15.975752  369925 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:56:15.981635  369925 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:56:15.984105  369925 out.go:235]   - Generating certificates and keys ...
	I0916 11:56:15.984222  369925 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:56:15.984305  369925 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:56:16.250457  369925 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:56:16.375579  369925 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:56:16.472746  369925 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:56:16.569904  369925 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:56:16.903980  369925 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:56:16.904160  369925 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-451928 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:56:17.174281  369925 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:56:17.174455  369925 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-451928 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:56:17.398938  369925 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:56:17.545679  369925 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:56:17.695489  369925 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:56:17.695611  369925 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:56:17.882081  369925 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:56:17.956171  369925 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:56:18.164752  369925 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:56:18.357126  369925 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:56:18.577865  369925 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:56:18.578456  369925 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:56:18.580900  369925 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:56:18.584022  369925 out.go:235]   - Booting up control plane ...
	I0916 11:56:18.584135  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:56:18.584206  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:56:18.584263  369925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:56:18.592926  369925 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:56:18.598803  369925 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:56:18.598898  369925 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:56:18.686928  369925 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:56:18.687087  369925 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:56:19.188585  369925 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.688188ms
	I0916 11:56:19.188683  369925 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:56:23.689816  369925 kubeadm.go:310] [api-check] The API server is healthy after 4.501234881s
	I0916 11:56:23.700925  369925 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:56:23.712262  369925 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:56:23.731867  369925 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:56:23.732082  369925 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-451928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:56:23.740449  369925 kubeadm.go:310] [bootstrap-token] Using token: 1cwsrz.9f3rgqsuscyt2usy
	I0916 11:56:23.742197  369925 out.go:235]   - Configuring RBAC rules ...
	I0916 11:56:23.742343  369925 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:56:23.747376  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:56:23.753665  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:56:23.756482  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:56:23.759949  369925 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:56:23.762767  369925 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:56:24.096235  369925 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:56:24.521719  369925 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:56:25.096879  369925 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:56:25.097870  369925 kubeadm.go:310] 
	I0916 11:56:25.097955  369925 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:56:25.097965  369925 kubeadm.go:310] 
	I0916 11:56:25.098061  369925 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:56:25.098070  369925 kubeadm.go:310] 
	I0916 11:56:25.098099  369925 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:56:25.098209  369925 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:56:25.098294  369925 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:56:25.098307  369925 kubeadm.go:310] 
	I0916 11:56:25.098376  369925 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:56:25.098389  369925 kubeadm.go:310] 
	I0916 11:56:25.098467  369925 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:56:25.098478  369925 kubeadm.go:310] 
	I0916 11:56:25.098550  369925 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:56:25.098650  369925 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:56:25.098758  369925 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:56:25.098776  369925 kubeadm.go:310] 
	I0916 11:56:25.098894  369925 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:56:25.099001  369925 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:56:25.099012  369925 kubeadm.go:310] 
	I0916 11:56:25.099131  369925 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1cwsrz.9f3rgqsuscyt2usy \
	I0916 11:56:25.099258  369925 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 11:56:25.099290  369925 kubeadm.go:310] 	--control-plane 
	I0916 11:56:25.099300  369925 kubeadm.go:310] 
	I0916 11:56:25.099403  369925 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:56:25.099431  369925 kubeadm.go:310] 
	I0916 11:56:25.099631  369925 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1cwsrz.9f3rgqsuscyt2usy \
	I0916 11:56:25.099812  369925 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 11:56:25.102791  369925 kubeadm.go:310] W0916 11:56:15.903810    1328 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:56:25.103142  369925 kubeadm.go:310] W0916 11:56:15.904614    1328 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:56:25.103423  369925 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:56:25.103527  369925 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
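
kubeadm init has now completed and /etc/kubernetes/admin.conf is in place, so the control plane could already be queried before minikube proceeds to CNI below; a sketch using the same pinned kubectl this run uses elsewhere:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system
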
	I0916 11:56:25.103557  369925 cni.go:84] Creating CNI manager for ""
	I0916 11:56:25.103575  369925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:25.106291  369925 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:56:25.107572  369925 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:56:25.111391  369925 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:56:25.111412  369925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:56:25.128930  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:56:25.326058  369925 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:56:25.326137  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:25.326155  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-451928 minikube.k8s.io/updated_at=2024_09_16T11_56_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=default-k8s-diff-port-451928 minikube.k8s.io/primary=true
	I0916 11:56:25.334061  369925 ops.go:34] apiserver oom_adj: -16
	I0916 11:56:25.415582  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:25.916131  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:26.415875  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:26.916505  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:27.416652  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:27.915672  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.415630  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.916053  369925 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:56:28.986004  369925 kubeadm.go:1113] duration metric: took 3.65993128s to wait for elevateKubeSystemPrivileges
	I0916 11:56:28.986059  369925 kubeadm.go:394] duration metric: took 13.24304259s to StartCluster
	I0916 11:56:28.986084  369925 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:28.986181  369925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:56:28.987987  369925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:56:28.988247  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:56:28.988275  369925 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:56:28.988246  369925 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:56:28.988353  369925 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-451928"
	I0916 11:56:28.988356  369925 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-451928"
	I0916 11:56:28.988371  369925 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-451928"
	I0916 11:56:28.988376  369925 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-451928"
	I0916 11:56:28.988398  369925 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:56:28.988460  369925 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:28.990224  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:28.990977  369925 out.go:177] * Verifying Kubernetes components...
	I0916 11:56:28.991103  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:28.992313  369925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:56:29.018007  369925 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-451928"
	I0916 11:56:29.018056  369925 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:56:29.018152  369925 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:56:29.018499  369925 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:29.019540  369925 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:56:29.019561  369925 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:56:29.019604  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:29.045132  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:29.049324  369925 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:56:29.049370  369925 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:56:29.049429  369925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:29.068614  369925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:56:29.108027  369925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
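
The sed pipeline above splices a hosts stanza into the CoreDNS Corefile before replacing the ConfigMap, so in-cluster DNS can resolve host.minikube.internal. Reconstructed from the sed expressions themselves, the inserted block is:

        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
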
	I0916 11:56:29.210051  369925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:56:29.214324  369925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:56:29.318299  369925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:56:29.511298  369925 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:56:29.512950  369925 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-451928" to be "Ready" ...
	W0916 11:56:29.613687  369925 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-451928" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0916 11:56:29.613812  369925 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0916 11:56:29.905443  369925 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:56:29.907318  369925 addons.go:510] duration metric: took 919.042056ms for enable addons: enabled=[storage-provisioner default-storageclass]
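
The kapi.go:211/start.go:160 failure above is a benign optimistic-concurrency conflict: the attempt to scale coredns down to 1 replica raced with the controller manager updating the same Deployment, and minikube classifies the resulting "object has been modified" error as non-retryable, leaving both replicas running (visible in the pod lists below). The same scale-down could be retried manually; a sketch, assuming a working host kubectl against the kubeconfig this run updates:

    kubectl --context default-k8s-diff-port-451928 -n kube-system scale deployment coredns --replicas=1
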
	I0916 11:56:31.516192  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:33.516791  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:36.016869  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:38.516958  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:41.016196  369925 node_ready.go:53] node "default-k8s-diff-port-451928" has status "Ready":"False"
	I0916 11:56:41.516131  369925 node_ready.go:49] node "default-k8s-diff-port-451928" has status "Ready":"True"
	I0916 11:56:41.516158  369925 node_ready.go:38] duration metric: took 12.003155681s for node "default-k8s-diff-port-451928" to be "Ready" ...
	I0916 11:56:41.516169  369925 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:56:41.522890  369925 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.028991  369925 pod_ready.go:93] pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.029023  369925 pod_ready.go:82] duration metric: took 1.506107319s for pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.029038  369925 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.034688  369925 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.034715  369925 pod_ready.go:82] duration metric: took 5.669153ms for pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.034729  369925 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.039143  369925 pod_ready.go:93] pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.039165  369925 pod_ready.go:82] duration metric: took 4.428544ms for pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.039177  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.043833  369925 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.043858  369925 pod_ready.go:82] duration metric: took 4.669057ms for pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.043869  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.116405  369925 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.116427  369925 pod_ready.go:82] duration metric: took 72.552944ms for pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.116438  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g84zv" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.516582  369925 pod_ready.go:93] pod "kube-proxy-g84zv" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.516608  369925 pod_ready.go:82] duration metric: took 400.162448ms for pod "kube-proxy-g84zv" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.516632  369925 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.916727  369925 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:56:43.916750  369925 pod_ready.go:82] duration metric: took 400.110653ms for pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:56:43.916762  369925 pod_ready.go:39] duration metric: took 2.400579164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:56:43.916774  369925 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:56:43.916822  369925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:56:43.928042  369925 api_server.go:72] duration metric: took 14.939651343s to wait for apiserver process to appear ...
	I0916 11:56:43.928069  369925 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:56:43.928094  369925 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0916 11:56:43.931965  369925 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0916 11:56:43.932933  369925 api_server.go:141] control plane version: v1.31.1
	I0916 11:56:43.932960  369925 api_server.go:131] duration metric: took 4.882393ms to wait for apiserver health ...
	I0916 11:56:43.932970  369925 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:56:44.120099  369925 system_pods.go:59] 9 kube-system pods found
	I0916 11:56:44.120130  369925 system_pods.go:61] "coredns-7c65d6cfc9-c6qt9" [4e0063e4-a603-400c-acb8-094aed6b2941] Running
	I0916 11:56:44.120135  369925 system_pods.go:61] "coredns-7c65d6cfc9-tnm2s" [1ea2318a-d454-406d-bb11-aa3e16dc2950] Running
	I0916 11:56:44.120138  369925 system_pods.go:61] "etcd-default-k8s-diff-port-451928" [1b71472f-f6fc-4a12-bbfc-0ee84a439f81] Running
	I0916 11:56:44.120142  369925 system_pods.go:61] "kindnet-rk7s2" [9b5ccae0-58d8-475c-9c5a-dbb30e19f569] Running
	I0916 11:56:44.120146  369925 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-451928" [f1bb7524-02b3-4ba9-9e22-e4993a8a10b1] Running
	I0916 11:56:44.120149  369925 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-451928" [89cefae9-3120-4eda-beea-28223e0ce7f0] Running
	I0916 11:56:44.120153  369925 system_pods.go:61] "kube-proxy-g84zv" [9e114aae-0ef0-40a3-96c6-f2bc67943f01] Running
	I0916 11:56:44.120156  369925 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-451928" [c53be62e-0975-4134-9769-7df0c6a05afb] Running
	I0916 11:56:44.120161  369925 system_pods.go:61] "storage-provisioner" [3e5fdbb0-ecfb-490a-8314-e624e944b4b5] Running
	I0916 11:56:44.120168  369925 system_pods.go:74] duration metric: took 187.191857ms to wait for pod list to return data ...
	I0916 11:56:44.120175  369925 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:56:44.317310  369925 default_sa.go:45] found service account: "default"
	I0916 11:56:44.317348  369925 default_sa.go:55] duration metric: took 197.165786ms for default service account to be created ...
	I0916 11:56:44.317359  369925 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:56:44.519297  369925 system_pods.go:86] 9 kube-system pods found
	I0916 11:56:44.519330  369925 system_pods.go:89] "coredns-7c65d6cfc9-c6qt9" [4e0063e4-a603-400c-acb8-094aed6b2941] Running
	I0916 11:56:44.519339  369925 system_pods.go:89] "coredns-7c65d6cfc9-tnm2s" [1ea2318a-d454-406d-bb11-aa3e16dc2950] Running
	I0916 11:56:44.519344  369925 system_pods.go:89] "etcd-default-k8s-diff-port-451928" [1b71472f-f6fc-4a12-bbfc-0ee84a439f81] Running
	I0916 11:56:44.519351  369925 system_pods.go:89] "kindnet-rk7s2" [9b5ccae0-58d8-475c-9c5a-dbb30e19f569] Running
	I0916 11:56:44.519356  369925 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-451928" [f1bb7524-02b3-4ba9-9e22-e4993a8a10b1] Running
	I0916 11:56:44.519362  369925 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-451928" [89cefae9-3120-4eda-beea-28223e0ce7f0] Running
	I0916 11:56:44.519369  369925 system_pods.go:89] "kube-proxy-g84zv" [9e114aae-0ef0-40a3-96c6-f2bc67943f01] Running
	I0916 11:56:44.519377  369925 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-451928" [c53be62e-0975-4134-9769-7df0c6a05afb] Running
	I0916 11:56:44.519382  369925 system_pods.go:89] "storage-provisioner" [3e5fdbb0-ecfb-490a-8314-e624e944b4b5] Running
	I0916 11:56:44.519391  369925 system_pods.go:126] duration metric: took 202.026143ms to wait for k8s-apps to be running ...
	I0916 11:56:44.519404  369925 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:56:44.519454  369925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:56:44.530991  369925 system_svc.go:56] duration metric: took 11.577254ms WaitForService to wait for kubelet
	I0916 11:56:44.531030  369925 kubeadm.go:582] duration metric: took 15.54264235s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:56:44.531057  369925 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:56:44.717684  369925 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:56:44.717712  369925 node_conditions.go:123] node cpu capacity is 8
	I0916 11:56:44.717722  369925 node_conditions.go:105] duration metric: took 186.660851ms to run NodePressure ...
	I0916 11:56:44.717733  369925 start.go:241] waiting for startup goroutines ...
	I0916 11:56:44.717739  369925 start.go:246] waiting for cluster config update ...
	I0916 11:56:44.717749  369925 start.go:255] writing updated cluster config ...
	I0916 11:56:44.718049  369925 ssh_runner.go:195] Run: rm -f paused
	I0916 11:56:44.724825  369925 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-451928" cluster and "default" namespace by default
	E0916 11:56:44.725996  369925 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
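
Note on the "exec format error" above: the kernel returns ENOEXEC when a binary's executable format does not match the host (typically an architecture mismatch, e.g. an arm64 kubectl on an amd64 runner), and Go's os/exec surfaces it exactly as logged ("fork/exec ...: exec format error"). A minimal sketch (not minikube's code) of how that failure looks from Go, assuming kubectl sits at the logged path:

```go
// Sketch only: shows how Go's os/exec reports ENOEXEC ("exec format error")
// when /usr/local/bin/kubectl was built for a different architecture.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	out, err := exec.Command("/usr/local/bin/kubectl", "version", "--client").CombinedOutput()
	if errors.Is(err, syscall.ENOEXEC) {
		// Matches the log line: "fork/exec /usr/local/bin/kubectl: exec format error"
		fmt.Println("kubectl binary does not match this host's architecture:", err)
		return
	}
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Print(string(out))
}
```

That pattern is consistent with the failures in this run: every step that shells out to kubectl dies immediately with the same error, while steps driven by the minikube binary itself complete.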
	
	
	==> CRI-O <==
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631429875Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/447f44726f03971435b98afa47474c7dd5b9992dfeb3ade9078289d4125787ad/merged/etc/group: no such file or directory"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631733377Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-tnm2s/coredns" id=e54c9faa-287e-4fe9-9337-4d48efaf06fc name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.631821742Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.710156036Z" level=info msg="Created container 08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a: kube-system/storage-provisioner/storage-provisioner" id=ae06edbb-7f1e-4bf1-a892-41bea84b1c62 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.710838661Z" level=info msg="Starting container: 08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a" id=3537d5b4-7fd1-4045-a722-bc2ada20016a name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.717793917Z" level=info msg="Started container" PID=2272 containerID=08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a description=kube-system/storage-provisioner/storage-provisioner id=3537d5b4-7fd1-4045-a722-bc2ada20016a name=/runtime.v1.RuntimeService/StartContainer sandboxID=76d2ac2b9d946657e773a66ffdfe9830c488cb4be1aa1b58bb27289cb5f0ad15
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.728172616Z" level=info msg="Created container 045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3: kube-system/coredns-7c65d6cfc9-c6qt9/coredns" id=4695c526-57e0-44df-b4c6-58a2d310fce8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.729064907Z" level=info msg="Starting container: 045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3" id=fae55f1f-4e38-402c-8a98-c6626a612f31 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.735728481Z" level=info msg="Started container" PID=2285 containerID=045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3 description=kube-system/coredns-7c65d6cfc9-c6qt9/coredns id=fae55f1f-4e38-402c-8a98-c6626a612f31 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c131646e0d50d73f2a3004247eaab734b98ca5366f46bc44b47bc034c0a2f35b
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.741598902Z" level=info msg="Created container 688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba: kube-system/coredns-7c65d6cfc9-tnm2s/coredns" id=e54c9faa-287e-4fe9-9337-4d48efaf06fc name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.793742567Z" level=info msg="Starting container: 688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba" id=2bf7c9d7-370d-4dff-b233-6c75543286c2 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 11:56:41 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:41.801265340Z" level=info msg="Started container" PID=2311 containerID=688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba description=kube-system/coredns-7c65d6cfc9-tnm2s/coredns id=2bf7c9d7-370d-4dff-b233-6c75543286c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f6d7320bc95f7e18391efc33e51215320dbbeeeb8a8f38842192646dcd50333
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.378724155Z" level=info msg="Running pod sandbox: kube-system/metrics-server-6867b74b74-6v8cb/POD" id=c58bb70b-c2b6-42f6-a183-0ad140868f19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.378805416Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.393908083Z" level=info msg="Got pod network &{Name:metrics-server-6867b74b74-6v8cb Namespace:kube-system ID:ce54ddb354d1e4549bae1ff277f8ecabccfb2a8e731972e0a29a1f173f62b078 UID:5b81dba3-8443-4591-b969-a08337476107 NetNS:/var/run/netns/b4565ac0-55a1-43cd-9c8e-c0153ca56015 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.393960412Z" level=info msg="Adding pod kube-system_metrics-server-6867b74b74-6v8cb to CNI network \"kindnet\" (type=ptp)"
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.413820017Z" level=info msg="Got pod network &{Name:metrics-server-6867b74b74-6v8cb Namespace:kube-system ID:ce54ddb354d1e4549bae1ff277f8ecabccfb2a8e731972e0a29a1f173f62b078 UID:5b81dba3-8443-4591-b969-a08337476107 NetNS:/var/run/netns/b4565ac0-55a1-43cd-9c8e-c0153ca56015 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.413979762Z" level=info msg="Checking pod kube-system_metrics-server-6867b74b74-6v8cb for CNI network kindnet (type=ptp)"
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.416645657Z" level=info msg="Ran pod sandbox ce54ddb354d1e4549bae1ff277f8ecabccfb2a8e731972e0a29a1f173f62b078 with infra container: kube-system/metrics-server-6867b74b74-6v8cb/POD" id=c58bb70b-c2b6-42f6-a183-0ad140868f19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.417897761Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e8a4faef-9fdc-46c0-9f9c-74c423175122 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.418135363Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e8a4faef-9fdc-46c0-9f9c-74c423175122 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.419315319Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=c73ec089-dbdf-4457-af39-0e01fd6d05e1 name=/runtime.v1.ImageService/PullImage
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.455336128Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.541242386Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=963f6ecb-4702-48f8-a13a-9ba42af84d58 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 11:56:49 default-k8s-diff-port-451928 crio[1040]: time="2024-09-16 11:56:49.541584929Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=963f6ecb-4702-48f8-a13a-9ba42af84d58 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	688086cd61e60       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   0                   3f6d7320bc95f       coredns-7c65d6cfc9-tnm2s
	045367a0c66bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   0                   c131646e0d50d       coredns-7c65d6cfc9-c6qt9
	08fa360282467       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   8 seconds ago       Running             storage-provisioner       0                   76d2ac2b9d946       storage-provisioner
	9d3593f5e16ca       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   19 seconds ago      Running             kindnet-cni               0                   2cfd58dc984bd       kindnet-rk7s2
	4ec4a11e3a24d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   20 seconds ago      Running             kube-proxy                0                   abb22584f1ba3       kube-proxy-g84zv
	7928e02dcad53       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   30 seconds ago      Running             kube-apiserver            0                   2567d54afee95       kube-apiserver-default-k8s-diff-port-451928
	8e9e71592f12e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   30 seconds ago      Running             kube-scheduler            0                   205f26e38ad59       kube-scheduler-default-k8s-diff-port-451928
	478b30866eae0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   30 seconds ago      Running             kube-controller-manager   0                   5c43721ed6e3b       kube-controller-manager-default-k8s-diff-port-451928
	245f21f94877c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   30 seconds ago      Running             etcd                      0                   9a7d0f2b97773       etcd-default-k8s-diff-port-451928
	
	
	==> coredns [045367a0c66bb35b5bdc29ebde22b6662a27b1c2db5731425911f5c5d473e7a3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47255 - 13176 "HINFO IN 2991928513979281550.1716499013040556013. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008117726s
	
	
	==> coredns [688086cd61e602e539d517c7471412c1dffc0882938e43c43ff0d543e0f06aba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56972 - 15234 "HINFO IN 8713296587055300928.4817992167101797270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010563621s
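
Each CoreDNS replica logs exactly one HINFO query for a long random name; that appears to be the loop plugin's self-test probe, and the NXDOMAIN answer is the healthy outcome (the probe coming back would indicate a forwarding loop). A sketch of such a probe, assuming the github.com/miekg/dns module and the usual in-cluster DNS Service IP:

```go
// Sketch of the kind of probe the loop plugin sends: a HINFO query for a
// random, nonexistent name. NXDOMAIN, as logged above, is the healthy result.
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion("2991928513979281550.1716499013040556013.", dns.TypeHINFO)

	c := new(dns.Client)
	// 10.96.0.10 is the conventional cluster DNS Service IP; replace as needed.
	r, _, err := c.Exchange(m, "10.96.0.10:53")
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Println("rcode:", dns.RcodeToString[r.Rcode]) // expect NXDOMAIN
}
```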
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-451928
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-451928
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=default-k8s-diff-port-451928
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_56_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-451928
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:56:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:56:41 +0000   Mon, 16 Sep 2024 11:56:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-451928
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 778d5e12087f47e2ae021c8dc368f974
	  System UUID:                96d27eb1-3e28-4d66-8a00-17bd26589e25
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-c6qt9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     21s
	  kube-system                 coredns-7c65d6cfc9-tnm2s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     21s
	  kube-system                 etcd-default-k8s-diff-port-451928                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-rk7s2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-451928             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-451928    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-g84zv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-451928             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 metrics-server-6867b74b74-6v8cb                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         1s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             490Mi (1%)   390Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 20s   kube-proxy       
	  Normal   Starting                 26s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 26s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  26s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26s   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           22s   node-controller  Node default-k8s-diff-port-451928 event: Registered Node default-k8s-diff-port-451928 in Controller
	  Normal   NodeReady                9s    kubelet          Node default-k8s-diff-port-451928 status is now: NodeReady
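
The percentages in the "Allocated resources" table above are total requests and limits divided by the node's allocatable capacity, apparently truncated to a whole percent. A quick check against the numbers in the table (values copied from the report; the calculation itself is illustrative, not part of kubectl):

```go
// Sanity-check of the "Allocated resources" percentages: request / allocatable,
// truncated to a whole percent the way the table prints them.
package main

import "fmt"

func main() {
	cpuRequestMilli := 1050.0         // 1050m total CPU requests
	cpuAllocatableMilli := 8 * 1000.0 // 8 CPUs allocatable
	fmt.Println("cpu:", int(100*cpuRequestMilli/cpuAllocatableMilli), "%") // 13%

	memRequestKi := 490.0 * 1024   // 490Mi total memory requests
	memAllocatableKi := 32859320.0 // from the Allocatable block above
	fmt.Println("memory:", int(100*memRequestKi/memAllocatableKi), "%") // 1%
}
```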
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +1.027886] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000007] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +2.015855] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000006] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +4.223671] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000005] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000002] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000002] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +8.191398] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000006] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3318c5c795cb
	[  +0.000001] ll header: 00000000: 02 42 e5 53 5a 1d 02 42 c0 a8 67 02 08 00
	
	
	==> etcd [245f21f94877cabfe24fc492e462f5cf8b616b6966f8967725e5ff7548bdc657] <==
	{"level":"info","ts":"2024-09-16T11:56:19.630617Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:56:19.630833Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:56:19.630859Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:56:19.631376Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:56:19.631433Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:56:19.916096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T11:56:19.916210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.916238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:56:19.917236Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.917944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:56:19.917968Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:56:19.918201Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:56:19.918225Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:56:19.918235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.917948Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-451928 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:56:19.918329Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.918361Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:56:19.919099Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:56:19.920323Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:56:19.921495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T11:56:19.921594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:56:50 up  1:39,  0 users,  load average: 2.30, 1.34, 1.01
	Linux default-k8s-diff-port-451928 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [9d3593f5e16ca1e3018cf675c2777bfccccb3325b4a618a4fc6f6dab6efde4ab] <==
	I0916 11:56:30.296501       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:56:30.296764       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0916 11:56:30.296916       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:56:30.296930       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:56:30.296951       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:56:30.694194       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:56:30.694222       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:56:30.694230       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:56:30.894328       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:56:30.894449       1 metrics.go:61] Registering metrics
	I0916 11:56:30.894522       1 controller.go:374] Syncing nftables rules
	I0916 11:56:40.698104       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:56:40.698169       1 main.go:299] handling current node
	
	
	==> kube-apiserver [7928e02dcad530c19c0b6ec7e01fbb3385f0324d1232f9672d14062a1addcfd3] <==
	E0916 11:56:49.054385       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:56:49.055531       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 11:56:49.125621       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.98.97.152"}
	W0916 11:56:49.132330       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:56:49.132391       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:56:49.136335       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:56:49.136387       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:56:50.049899       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:56:50.049925       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:56:50.049963       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:56:50.050013       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:56:50.051102       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:56:50.051143       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
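
The repeated 503s above come from the aggregation layer: v1beta1.metrics.k8s.io is an aggregated APIService backed by the kube-system/metrics-server Service, and its pod never starts (see the image-pull failures in the CRI-O and kubelet sections), so every proxied fetch of its OpenAPI spec fails while the apiserver itself stays healthy. That is consistent with the 200/ok healthz check earlier in the minikube log; a minimal sketch of such a probe, assuming the same host and port, with TLS verification skipped only because this sketch carries no cluster CA bundle:

```go
// Hypothetical health probe against the endpoint the minikube log polls
// (https://192.168.103.2:8444/healthz). Default RBAC lets even anonymous
// requests read /healthz, /livez and /readyz.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: no CA bundle
	}}
	resp, err := client.Get("https://192.168.103.2:8444/healthz")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```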
	
	
	==> kube-controller-manager [478b30866eae01a91f51089d900b6295124848c3e35c0f765a4cbeb3bf0485fe] <==
	I0916 11:56:29.110053       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:56:29.193577       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:56:29.193627       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:56:29.310514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:29.703476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="184.541777ms"
	I0916 11:56:29.710974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.439513ms"
	I0916 11:56:29.711090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.796µs"
	I0916 11:56:29.711219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.08µs"
	I0916 11:56:41.247397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:41.270424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-451928"
	I0916 11:56:41.277033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.474µs"
	I0916 11:56:41.278381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.272µs"
	I0916 11:56:41.294102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.258µs"
	I0916 11:56:41.303925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.725µs"
	I0916 11:56:42.534787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.552µs"
	I0916 11:56:42.554508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.70186ms"
	I0916 11:56:42.554631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.387µs"
	I0916 11:56:42.572298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.233175ms"
	I0916 11:56:42.572396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.477µs"
	I0916 11:56:43.689750       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 11:56:49.079211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="14.317621ms"
	I0916 11:56:49.106348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="26.996533ms"
	I0916 11:56:49.106439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="55.149µs"
	I0916 11:56:49.106484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="25.84µs"
	I0916 11:56:49.552659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="65.202µs"
	
	
	==> kube-proxy [4ec4a11e3a24d5e1ce02dfd1183ec90b7b3781239d805a4d6ccf113375e15922] <==
	I0916 11:56:29.947995       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:56:30.046104       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 11:56:30.046167       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:56:30.064920       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:56:30.064979       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:56:30.067043       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:56:30.067493       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:56:30.067527       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:56:30.068845       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:56:30.069397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:56:30.069400       1 config.go:199] "Starting service config controller"
	I0916 11:56:30.069422       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:56:30.069563       1 config.go:328] "Starting node config controller"
	I0916 11:56:30.069629       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:56:30.169579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:56:30.169580       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:56:30.169853       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e9e71592f12e81a163e98e2f07e72e1f169a103a6aed393c95dee0e94c5cf50] <==
	W0916 11:56:21.814115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:56:21.814368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:21.812491       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:56:21.814396       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:56:21.814469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:56:21.814554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.687572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:56:22.687618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.748312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:56:22.748354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.798907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:56:22.798950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.852016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:56:22.852069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.912767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:56:22.912812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.917238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:56:22.917276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:22.971675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:56:22.971722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:23.005307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:56:23.005394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:56:23.091228       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:56:23.091278       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:56:25.811059       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.696043    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzczw\" (UniqueName: \"kubernetes.io/projected/9b5ccae0-58d8-475c-9c5a-dbb30e19f569-kube-api-access-tzczw\") pod \"kindnet-rk7s2\" (UID: \"9b5ccae0-58d8-475c-9c5a-dbb30e19f569\") " pod="kube-system/kindnet-rk7s2"
	Sep 16 11:56:29 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:29.705478    1676 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:56:30 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:30.510135    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rk7s2" podStartSLOduration=1.510110896 podStartE2EDuration="1.510110896s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:30.509932137 +0000 UTC m=+6.216515412" watchObservedRunningTime="2024-09-16 11:56:30.510110896 +0000 UTC m=+6.216694175"
	Sep 16 11:56:30 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:30.519577    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g84zv" podStartSLOduration=1.519552813 podStartE2EDuration="1.519552813s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:30.519473363 +0000 UTC m=+6.226056639" watchObservedRunningTime="2024-09-16 11:56:30.519552813 +0000 UTC m=+6.226136092"
	Sep 16 11:56:34 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:34.430244    1676 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487794430057224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:34 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:34.430286    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487794430057224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.240163    1676 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375093    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrgm\" (UniqueName: \"kubernetes.io/projected/4e0063e4-a603-400c-acb8-094aed6b2941-kube-api-access-rfrgm\") pod \"coredns-7c65d6cfc9-c6qt9\" (UID: \"4e0063e4-a603-400c-acb8-094aed6b2941\") " pod="kube-system/coredns-7c65d6cfc9-c6qt9"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375143    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw5mk\" (UniqueName: \"kubernetes.io/projected/3e5fdbb0-ecfb-490a-8314-e624e944b4b5-kube-api-access-cw5mk\") pod \"storage-provisioner\" (UID: \"3e5fdbb0-ecfb-490a-8314-e624e944b4b5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375194    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e0063e4-a603-400c-acb8-094aed6b2941-config-volume\") pod \"coredns-7c65d6cfc9-c6qt9\" (UID: \"4e0063e4-a603-400c-acb8-094aed6b2941\") " pod="kube-system/coredns-7c65d6cfc9-c6qt9"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375237    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3e5fdbb0-ecfb-490a-8314-e624e944b4b5-tmp\") pod \"storage-provisioner\" (UID: \"3e5fdbb0-ecfb-490a-8314-e624e944b4b5\") " pod="kube-system/storage-provisioner"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375269    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ea2318a-d454-406d-bb11-aa3e16dc2950-config-volume\") pod \"coredns-7c65d6cfc9-tnm2s\" (UID: \"1ea2318a-d454-406d-bb11-aa3e16dc2950\") " pod="kube-system/coredns-7c65d6cfc9-tnm2s"
	Sep 16 11:56:41 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:41.375285    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzpfm\" (UniqueName: \"kubernetes.io/projected/1ea2318a-d454-406d-bb11-aa3e16dc2950-kube-api-access-qzpfm\") pod \"coredns-7c65d6cfc9-tnm2s\" (UID: \"1ea2318a-d454-406d-bb11-aa3e16dc2950\") " pod="kube-system/coredns-7c65d6cfc9-tnm2s"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.534781    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tnm2s" podStartSLOduration=13.534759159 podStartE2EDuration="13.534759159s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.534367604 +0000 UTC m=+18.240950903" watchObservedRunningTime="2024-09-16 11:56:42.534759159 +0000 UTC m=+18.241342440"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.574588    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.574561361 podStartE2EDuration="13.574561361s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.574522367 +0000 UTC m=+18.281105644" watchObservedRunningTime="2024-09-16 11:56:42.574561361 +0000 UTC m=+18.281144637"
	Sep 16 11:56:42 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:42.575038    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c6qt9" podStartSLOduration=13.575025761 podStartE2EDuration="13.575025761s" podCreationTimestamp="2024-09-16 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:56:42.561105849 +0000 UTC m=+18.267689125" watchObservedRunningTime="2024-09-16 11:56:42.575025761 +0000 UTC m=+18.281609035"
	Sep 16 11:56:44 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:44.431407    1676 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487804431221811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:44 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:44.431440    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726487804431221811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:56:49 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:49.229313    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5b81dba3-8443-4591-b969-a08337476107-tmp-dir\") pod \"metrics-server-6867b74b74-6v8cb\" (UID: \"5b81dba3-8443-4591-b969-a08337476107\") " pod="kube-system/metrics-server-6867b74b74-6v8cb"
	Sep 16 11:56:49 default-k8s-diff-port-451928 kubelet[1676]: I0916 11:56:49.229431    1676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5x8b\" (UniqueName: \"kubernetes.io/projected/5b81dba3-8443-4591-b969-a08337476107-kube-api-access-h5x8b\") pod \"metrics-server-6867b74b74-6v8cb\" (UID: \"5b81dba3-8443-4591-b969-a08337476107\") " pod="kube-system/metrics-server-6867b74b74-6v8cb"
	Sep 16 11:56:49 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:49.489537    1676 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:56:49 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:49.489614    1676 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:56:49 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:49.489776    1676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5x8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-6v8cb_kube-system(5b81dba3-8443-4591-b969-a08337476107): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 16 11:56:49 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:49.490974    1676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-6v8cb" podUID="5b81dba3-8443-4591-b969-a08337476107"
	Sep 16 11:56:49 default-k8s-diff-port-451928 kubelet[1676]: E0916 11:56:49.541866    1676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6v8cb" podUID="5b81dba3-8443-4591-b969-a08337476107"
	
	
	==> storage-provisioner [08fa360282467442b82094f47a5f3c4014b8652ff6b4612c24a36abc57a0009a] <==
	I0916 11:56:41.733859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:56:41.743237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:56:41.743282       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:56:41.802642       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:56:41.802712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18fcca8c-b8bd-4cf6-b5f8-70b48585a383", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097 became leader
	I0916 11:56:41.802842       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097!
	I0916 11:56:41.903549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-451928_4947c811-89fe-4d2d-badd-cad066c3a097!
	

-- /stdout --
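A note on the kubelet errors in the log above: the metrics-server pulls from fake.domain are expected to fail, because the test enables the addon with --registries=MetricsServer=fake.domain (visible in the Audit table further down). The "dial tcp: lookup fake.domain: no such host" message is an ordinary DNS resolution failure; a minimal Go sketch (illustrative only, not part of the test suite) reproduces the same error:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain is the registry host the test injects; it has no DNS record.
		_, err := net.LookupHost("fake.domain")
		fmt.Println(err) // expected: lookup fake.domain: no such host
	}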
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (507.234µs)
helpers_test.go:263: kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.61s)
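The failure itself, here and in most tests in this run, traces back to "fork/exec /usr/local/bin/kubectl: exec format error": the kernel refuses to execute the kubectl binary at all, which typically means the file is not a valid executable for this host (for example, built for a different architecture). A minimal Go sketch of one way to check, assuming the binary is ELF as expected on this linux/amd64 runner (illustrative only):

	package main

	import (
		"debug/elf"
		"fmt"
		"log"
		"runtime"
	)

	func main() {
		// Path taken from the failing command above.
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			log.Fatalf("not a readable ELF binary: %v", err)
		}
		defer f.Close()
		fmt.Printf("binary machine: %v, host: %s/%s\n", f.Machine, runtime.GOOS, runtime.GOARCH)
	}

If the reported machine does not match the host architecture, the exec format error follows directly.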

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (7.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zqv8v" [264208fe-d84b-493b-aec2-9ef0c7ae7794] Running
E0916 12:01:27.812778   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004685983s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-451928 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-451928 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (536.466µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-451928 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
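The empty deployment info is a direct consequence of the same kubectl exec failure: the describe command never ran, so there was nothing for the image assertion to match. Roughly, the failing step amounts to the following (a sketch under that reading, not the actual start_stop_delete_test.go code):

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-451928",
			"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			// In this run the exec itself fails: "exec format error".
			log.Fatalf("kubectl failed before producing output: %v", err)
		}
		if !strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
			log.Fatal("addon did not load correct image")
		}
	}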
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-451928
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-451928:

-- stdout --
	[
	    {
	        "Id": "5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae",
	        "Created": "2024-09-16T11:56:10.793026862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376474,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:56:57.413101414Z",
	            "FinishedAt": "2024-09-16T11:56:56.533548623Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/hosts",
	        "LogPath": "/var/lib/docker/containers/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae/5e4edb1ce4fb773e9b26d36e164c68b28168a64145a18c37e7d99d7c631e95ae-json.log",
	        "Name": "/default-k8s-diff-port-451928",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-451928:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-451928",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e831e4319d21ef67cdd8d41c095a455fffcda81ceda5489e66f1b8ab5c8c01fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-451928",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-451928/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-451928",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-451928",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-451928",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "973a673a80b11b2e3612d29fcb209d9bb802b7de57bf5bedce5a861b473f7aec",
	            "SandboxKey": "/var/run/docker/netns/973a673a80b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-451928": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "22c51b08b0ca2daf580627f39cd71ae241a476b62a744a7a3bfd63c1aaadfdfe",
	                    "EndpointID": "64c91ba2d3ea59690a1dd76f33c19d69c9c9d83fa11dc4ffb050d1442ee9eed5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-451928",
	                        "5e4edb1ce4fb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
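One detail worth noting in the inspect output: the container publishes 8444/tcp to 127.0.0.1:33116, consistent with the --apiserver-port=8444 flag this profile is started with (see the Audit table below). The host port can be read back with the same Go-template style minikube itself uses later in this log for 22/tcp; a small sketch, adapted here to the API server port and assuming a local docker CLI:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-451928").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("API server published on 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
	}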
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-451928 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-451928 logs -n 25: (1.304854142s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-406673        | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-406673             | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC | 16 Sep 24 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-406673 image                           | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p old-k8s-version-406673                              | old-k8s-version-406673       | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-946599 | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:50 UTC |
	|         | disable-driver-mounts-946599                           |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:50 UTC | 16 Sep 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-179932             | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-179932                  | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:51 UTC | 16 Sep 24 11:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-179932 image list                           | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:55 UTC | 16 Sep 24 11:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-451928  | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-451928       | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:56:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:56:57.021732  376177 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:56:57.022013  376177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:56:57.022022  376177 out.go:358] Setting ErrFile to fd 2...
	I0916 11:56:57.022027  376177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:56:57.022198  376177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:56:57.022760  376177 out.go:352] Setting JSON to false
	I0916 11:56:57.024152  376177 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5957,"bootTime":1726481860,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:56:57.024252  376177 start.go:139] virtualization: kvm guest
	I0916 11:56:57.026942  376177 out.go:177] * [default-k8s-diff-port-451928] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:56:57.028541  376177 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:56:57.028541  376177 notify.go:220] Checking for updates...
	I0916 11:56:57.031300  376177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:56:57.032713  376177 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:56:57.034203  376177 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:56:57.035606  376177 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:56:57.036995  376177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:56:57.038741  376177 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:56:57.039266  376177 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:56:57.064733  376177 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:56:57.064836  376177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:56:57.120468  376177 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:56:57.109093626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:56:57.120584  376177 docker.go:318] overlay module found
	I0916 11:56:57.123515  376177 out.go:177] * Using the docker driver based on existing profile
	I0916 11:56:57.124931  376177 start.go:297] selected driver: docker
	I0916 11:56:57.124945  376177 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:56:57.125044  376177 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:56:57.125987  376177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:56:57.181876  376177 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 11:56:57.171283975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:56:57.182189  376177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:56:57.182217  376177 cni.go:84] Creating CNI manager for ""
	I0916 11:56:57.182241  376177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:56:57.182274  376177 start.go:340] cluster config:
	{Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:56:57.184317  376177 out.go:177] * Starting "default-k8s-diff-port-451928" primary control-plane node in "default-k8s-diff-port-451928" cluster
	I0916 11:56:57.185654  376177 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 11:56:57.187110  376177 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:56:57.188661  376177 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:56:57.188711  376177 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:56:57.188721  376177 cache.go:56] Caching tarball of preloaded images
	I0916 11:56:57.188776  376177 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:56:57.188822  376177 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:56:57.188832  376177 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:56:57.188961  376177 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json ...
	W0916 11:56:57.210335  376177 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:56:57.210358  376177 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:56:57.210439  376177 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:56:57.210461  376177 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:56:57.210467  376177 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:56:57.210475  376177 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:56:57.210483  376177 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:56:57.273482  376177 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:56:57.273532  376177 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:56:57.273576  376177 start.go:360] acquireMachinesLock for default-k8s-diff-port-451928: {Name:mkd4d5ce5590d094d470576746b410c1fbb05d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:56:57.273662  376177 start.go:364] duration metric: took 55.147µs to acquireMachinesLock for "default-k8s-diff-port-451928"
	I0916 11:56:57.273682  376177 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:56:57.273687  376177 fix.go:54] fixHost starting: 
	I0916 11:56:57.273892  376177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:57.293612  376177 fix.go:112] recreateIfNeeded on default-k8s-diff-port-451928: state=Stopped err=<nil>
	W0916 11:56:57.293678  376177 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:56:57.295889  376177 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-451928" ...
	I0916 11:56:57.297506  376177 cli_runner.go:164] Run: docker start default-k8s-diff-port-451928
	I0916 11:56:57.592897  376177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:56:57.612302  376177 kic.go:430] container "default-k8s-diff-port-451928" state is running.
	I0916 11:56:57.612666  376177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:56:57.631373  376177 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/config.json ...
	I0916 11:56:57.631652  376177 machine.go:93] provisionDockerMachine start ...
	I0916 11:56:57.631726  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:56:57.650863  376177 main.go:141] libmachine: Using SSH client type: native
	I0916 11:56:57.651112  376177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0916 11:56:57.651129  376177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:56:57.651830  376177 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59914->127.0.0.1:33113: read: connection reset by peer
	I0916 11:57:00.788878  376177 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-451928
	
	I0916 11:57:00.788904  376177 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-451928"
	I0916 11:57:00.788971  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:00.809634  376177 main.go:141] libmachine: Using SSH client type: native
	I0916 11:57:00.809892  376177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0916 11:57:00.809917  376177 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-451928 && echo "default-k8s-diff-port-451928" | sudo tee /etc/hostname
	I0916 11:57:00.957987  376177 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-451928
	
	I0916 11:57:00.958081  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:00.977683  376177 main.go:141] libmachine: Using SSH client type: native
	I0916 11:57:00.977897  376177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0916 11:57:00.977920  376177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-451928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-451928/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-451928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:57:01.113726  376177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:57:01.113750  376177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 11:57:01.113795  376177 ubuntu.go:177] setting up certificates
	I0916 11:57:01.113807  376177 provision.go:84] configureAuth start
	I0916 11:57:01.113867  376177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:57:01.131570  376177 provision.go:143] copyHostCerts
	I0916 11:57:01.131642  376177 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 11:57:01.131654  376177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 11:57:01.131732  376177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 11:57:01.131847  376177 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 11:57:01.131859  376177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 11:57:01.131896  376177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 11:57:01.131976  376177 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 11:57:01.131985  376177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 11:57:01.132031  376177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 11:57:01.132099  376177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-451928 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-451928 localhost minikube]
	I0916 11:57:01.380885  376177 provision.go:177] copyRemoteCerts
	I0916 11:57:01.380966  376177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:57:01.381009  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:01.399056  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:01.498432  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:57:01.522617  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 11:57:01.545737  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:57:01.568879  376177 provision.go:87] duration metric: took 455.058398ms to configureAuth
	I0916 11:57:01.568908  376177 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:57:01.569089  376177 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:57:01.569197  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:01.588505  376177 main.go:141] libmachine: Using SSH client type: native
	I0916 11:57:01.588756  376177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0916 11:57:01.588774  376177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:57:01.904530  376177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:57:01.904564  376177 machine.go:96] duration metric: took 4.272892823s to provisionDockerMachine
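The container-runtime step above drops an environment file for the crio systemd unit and restarts the service, which is why provisioning pauses for a moment here. A rough host-side equivalent in Go (assumes systemd and root; file path and contents are copied from the SSH command in the log, the rest is a sketch):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Environment file content exactly as emitted by the SSH command above.
        body := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(body), 0644); err != nil {
            panic(err)
        }
        // Restart CRI-O so it picks up the new options.
        if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
            panic(string(out))
        }
    }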
	I0916 11:57:01.904581  376177 start.go:293] postStartSetup for "default-k8s-diff-port-451928" (driver="docker")
	I0916 11:57:01.904595  376177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:57:01.904702  376177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:57:01.904749  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:01.923829  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:02.022555  376177 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:57:02.026018  376177 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:57:02.026052  376177 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:57:02.026060  376177 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:57:02.026068  376177 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:57:02.026081  376177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 11:57:02.026144  376177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 11:57:02.026255  376177 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 11:57:02.026383  376177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:57:02.035355  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:57:02.057484  376177 start.go:296] duration metric: took 152.886619ms for postStartSetup
	I0916 11:57:02.057565  376177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:57:02.057613  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:02.074912  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:02.166426  376177 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:57:02.170658  376177 fix.go:56] duration metric: took 4.896966192s for fixHost
	I0916 11:57:02.170687  376177 start.go:83] releasing machines lock for "default-k8s-diff-port-451928", held for 4.897013078s
	I0916 11:57:02.170758  376177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-451928
	I0916 11:57:02.188334  376177 ssh_runner.go:195] Run: cat /version.json
	I0916 11:57:02.188386  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:02.188429  376177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:57:02.188487  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:02.206506  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:02.207575  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:02.377463  376177 ssh_runner.go:195] Run: systemctl --version
	I0916 11:57:02.381618  376177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:57:02.520574  376177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:57:02.525176  376177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:57:02.533717  376177 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:57:02.533776  376177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:57:02.542324  376177 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:57:02.542352  376177 start.go:495] detecting cgroup driver to use...
	I0916 11:57:02.542391  376177 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:57:02.542440  376177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:57:02.553910  376177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:57:02.565267  376177 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:57:02.565329  376177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:57:02.577258  376177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:57:02.587908  376177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:57:02.673419  376177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:57:02.753799  376177 docker.go:233] disabling docker service ...
	I0916 11:57:02.753875  376177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:57:02.765805  376177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:57:02.777659  376177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:57:02.854609  376177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:57:02.934234  376177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:57:02.945327  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:57:02.961944  376177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:57:02.962017  376177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:57:02.971583  376177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:57:02.971647  376177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:57:02.981292  376177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:57:02.991124  376177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:57:03.000878  376177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:57:03.010371  376177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:57:03.020227  376177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:57:03.030220  376177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:57:03.040386  376177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:57:03.049110  376177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:57:03.058348  376177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:57:03.141173  376177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:57:03.261050  376177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:57:03.261116  376177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:57:03.264948  376177 start.go:563] Will wait 60s for crictl version
	I0916 11:57:03.265017  376177 ssh_runner.go:195] Run: which crictl
	I0916 11:57:03.268415  376177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:57:03.303941  376177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
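Both 60s waits above guard against racing a freshly restarted CRI-O: the socket file can exist before the daemon accepts connections, so polling an actual dial is the stronger check. A minimal sketch against the socket path from the log (the poll interval is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"      // socket path from the log
        deadline := time.Now().Add(60 * time.Second) // same budget as the log
        for {
            c, err := net.DialTimeout("unix", sock, time.Second)
            if err == nil {
                c.Close()
                fmt.Println("CRI socket is accepting connections")
                return
            }
            if time.Now().After(deadline) {
                panic(fmt.Sprintf("gave up waiting for %s: %v", sock, err))
            }
            time.Sleep(500 * time.Millisecond)
        }
    }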
	I0916 11:57:03.304040  376177 ssh_runner.go:195] Run: crio --version
	I0916 11:57:03.339624  376177 ssh_runner.go:195] Run: crio --version
	I0916 11:57:03.378475  376177 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 11:57:03.379948  376177 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-451928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:57:03.397936  376177 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:57:03.401912  376177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:57:03.413539  376177 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
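The cluster definition dumped above is minikube's persisted profile state; on disk it normally lives as JSON under .minikube/profiles/<name>/config.json. A hedged sketch that decodes a few of the fields visible in the dump; the struct below is a hand-written subset for illustration, not minikube's own type, and the path is an assumption inferred from the profile directory referenced later in this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Hand-written subset of the fields visible in the dump above;
    // illustrative only, not minikube's config type.
    type profile struct {
        Name             string
        Driver           string
        KubernetesConfig struct {
            KubernetesVersion string
            ClusterName       string
            ContainerRuntime  string
            ServiceCIDR       string
        }
    }

    func main() {
        path := os.ExpandEnv("$HOME/.minikube/profiles/default-k8s-diff-port-451928/config.json")
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        var p profile
        if err := json.Unmarshal(data, &p); err != nil {
            panic(err)
        }
        fmt.Printf("%s: Kubernetes %s, runtime %s, driver %s\n",
            p.Name, p.KubernetesConfig.KubernetesVersion,
            p.KubernetesConfig.ContainerRuntime, p.Driver)
    }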
	I0916 11:57:03.413676  376177 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:57:03.413742  376177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:57:03.454472  376177 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:57:03.454498  376177 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:57:03.454548  376177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:57:03.487880  376177 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:57:03.487909  376177 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:57:03.487918  376177 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.31.1 crio true true} ...
	I0916 11:57:03.488040  376177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-451928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:57:03.488101  376177 ssh_runner.go:195] Run: crio config
	I0916 11:57:03.532734  376177 cni.go:84] Creating CNI manager for ""
	I0916 11:57:03.532756  376177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 11:57:03.532765  376177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:57:03.532783  376177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-451928 NodeName:default-k8s-diff-port-451928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:57:03.532965  376177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-451928"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
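The generated kubeadm config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later written to /var/tmp/minikube/kubeadm.yaml.new. A small stdlib-only sketch that lists the kind of each document, handy for sanity-checking the file (splitting on the --- separator is a simplification that ignores quoted separators):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        // Path from the scp line that follows in the log.
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        kind := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
        for i, doc := range strings.Split(string(data), "\n---") {
            if m := kind.FindStringSubmatch(doc); m != nil {
                fmt.Printf("document %d: %s\n", i+1, m[1])
            }
        }
    }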
	I0916 11:57:03.533026  376177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:57:03.543130  376177 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:57:03.543203  376177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:57:03.552038  376177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0916 11:57:03.569540  376177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:57:03.586806  376177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0916 11:57:03.604315  376177 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:57:03.607935  376177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:57:03.618637  376177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:57:03.693813  376177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:57:03.706856  376177 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928 for IP: 192.168.103.2
	I0916 11:57:03.706881  376177 certs.go:194] generating shared ca certs ...
	I0916 11:57:03.706902  376177 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:57:03.707053  376177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 11:57:03.707091  376177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 11:57:03.707100  376177 certs.go:256] generating profile certs ...
	I0916 11:57:03.707176  376177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.key
	I0916 11:57:03.707229  376177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key.b47f4f28
	I0916 11:57:03.707268  376177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key
	I0916 11:57:03.707367  376177 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 11:57:03.707395  376177 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 11:57:03.707403  376177 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:57:03.707425  376177 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:57:03.707451  376177 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:57:03.707471  376177 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 11:57:03.707509  376177 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 11:57:03.708056  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:57:03.733377  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:57:03.757414  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:57:03.807678  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:57:03.833358  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 11:57:03.856003  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:57:03.899420  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:57:03.922841  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:57:03.947043  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 11:57:03.969782  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:57:03.992150  376177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 11:57:04.015931  376177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:57:04.033730  376177 ssh_runner.go:195] Run: openssl version
	I0916 11:57:04.039393  376177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:57:04.048873  376177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:57:04.052153  376177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:57:04.052198  376177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:57:04.058406  376177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:57:04.066735  376177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 11:57:04.075678  376177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 11:57:04.078995  376177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 11:57:04.079048  376177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 11:57:04.085403  376177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 11:57:04.094571  376177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 11:57:04.104248  376177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 11:57:04.107923  376177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 11:57:04.107996  376177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 11:57:04.114438  376177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:57:04.122793  376177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:57:04.126293  376177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:57:04.132445  376177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:57:04.138783  376177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:57:04.145006  376177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:57:04.151148  376177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:57:04.157629  376177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
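Each openssl x509 -checkend 86400 run above asks a single question: does the certificate expire within the next 24 hours (86400 seconds)? The same check in Go using only the standard library (certificate path taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of -checkend 86400: fail if NotAfter falls within 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 86400 seconds")
            os.Exit(1)
        }
        fmt.Println("certificate will not expire within 86400 seconds")
    }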
	I0916 11:57:04.163883  376177 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-451928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-451928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:57:04.163976  376177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:57:04.164017  376177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:57:04.197398  376177 cri.go:89] found id: ""
	I0916 11:57:04.197469  376177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:57:04.207000  376177 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:57:04.207021  376177 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:57:04.207069  376177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:57:04.215516  376177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:57:04.216244  376177 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-451928" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:57:04.216646  376177 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-451928" cluster setting kubeconfig missing "default-k8s-diff-port-451928" context setting]
	I0916 11:57:04.217231  376177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:57:04.218676  376177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:57:04.227168  376177 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0916 11:57:04.227199  376177 kubeadm.go:597] duration metric: took 20.172406ms to restartPrimaryControlPlane
	I0916 11:57:04.227208  376177 kubeadm.go:394] duration metric: took 63.349463ms to StartCluster
	I0916 11:57:04.227223  376177 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:57:04.227284  376177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:57:04.228512  376177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:57:04.228804  376177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:57:04.228867  376177 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:57:04.228983  376177 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-451928"
	I0916 11:57:04.228993  376177 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-451928"
	I0916 11:57:04.229006  376177 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-451928"
	I0916 11:57:04.229012  376177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-451928"
	W0916 11:57:04.229016  376177 addons.go:243] addon dashboard should already be in state true
	I0916 11:57:04.229030  376177 config.go:182] Loaded profile config "default-k8s-diff-port-451928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:57:04.228982  376177 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-451928"
	I0916 11:57:04.229087  376177 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-451928"
	W0916 11:57:04.229107  376177 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:57:04.229133  376177 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:57:04.229024  376177 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-451928"
	I0916 11:57:04.229153  376177 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-451928"
	W0916 11:57:04.229162  376177 addons.go:243] addon metrics-server should already be in state true
	I0916 11:57:04.229182  376177 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:57:04.229047  376177 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:57:04.229404  376177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:57:04.229652  376177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:57:04.229735  376177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:57:04.229656  376177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:57:04.231441  376177 out.go:177] * Verifying Kubernetes components...
	I0916 11:57:04.232953  376177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:57:04.252853  376177 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-451928"
	W0916 11:57:04.252874  376177 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:57:04.252897  376177 host.go:66] Checking if "default-k8s-diff-port-451928" exists ...
	I0916 11:57:04.253188  376177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-451928 --format={{.State.Status}}
	I0916 11:57:04.256037  376177 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:57:04.256038  376177 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:57:04.257890  376177 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:57:04.257908  376177 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:57:04.258039  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:04.260269  376177 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:57:04.261531  376177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:57:04.261572  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:57:04.261591  376177 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:57:04.261650  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:04.262859  376177 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:57:04.262882  376177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:57:04.262940  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:04.279950  376177 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:57:04.279974  376177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:57:04.280033  376177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-451928
	I0916 11:57:04.291860  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:04.293020  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:04.293094  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:04.311789  376177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/default-k8s-diff-port-451928/id_rsa Username:docker}
	I0916 11:57:04.340768  376177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:57:04.408631  376177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-451928" to be "Ready" ...
	I0916 11:57:04.415617  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:57:04.415647  376177 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:57:04.418328  376177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:57:04.418353  376177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:57:04.494127  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:57:04.494157  376177 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:57:04.495388  376177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:57:04.495417  376177 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:57:04.518818  376177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:57:04.524879  376177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:57:04.593924  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:57:04.593955  376177 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:57:04.594873  376177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:57:04.594903  376177 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:57:04.700363  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:57:04.700388  376177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 11:57:04.702677  376177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:57:04.801785  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:57:04.801815  376177 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0916 11:57:04.908384  376177 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:57:04.908426  376177 retry.go:31] will retry after 263.938309ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 11:57:04.908481  376177 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:57:04.908493  376177 retry.go:31] will retry after 172.424814ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:57:04.910199  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:57:04.910226  376177 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:57:05.009402  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:57:05.009438  376177 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0916 11:57:05.081953  376177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:57:05.100104  376177 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:57:05.100140  376177 retry.go:31] will retry after 134.095611ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
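The validation failures above are expected during a restart: kubectl tries to download the OpenAPI schema from localhost:8444 before the apiserver is listening again, so retry.go simply re-runs each apply after a short randomized delay. A generic sketch of that retry pattern (the attempt count and backoff range are illustrative, not minikube's values):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs kubectl apply until it succeeds or attempts run
    // out, sleeping a jittered delay between tries, like retry.go above.
    func applyWithRetry(manifest string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            out, e := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
            if e == nil {
                return nil
            }
            err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
            delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            panic(err)
        }
    }

In the log the retried invocations also add --force, which tells kubectl to delete and re-create resources that cannot be updated in place.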
	I0916 11:57:05.101467  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:57:05.101542  376177 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:57:05.127042  376177 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:57:05.127128  376177 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:57:05.173389  376177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:57:05.218017  376177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:57:05.234435  376177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:57:07.397703  376177 node_ready.go:49] node "default-k8s-diff-port-451928" has status "Ready":"True"
	I0916 11:57:07.397735  376177 node_ready.go:38] duration metric: took 2.989064613s for node "default-k8s-diff-port-451928" to be "Ready" ...
	I0916 11:57:07.397748  376177 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:57:07.512673  376177 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.598612  376177 pod_ready.go:93] pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace has status "Ready":"True"
	I0916 11:57:07.598643  376177 pod_ready.go:82] duration metric: took 85.878313ms for pod "coredns-7c65d6cfc9-c6qt9" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.598657  376177 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.609844  376177 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace has status "Ready":"True"
	I0916 11:57:07.609878  376177 pod_ready.go:82] duration metric: took 11.211665ms for pod "coredns-7c65d6cfc9-tnm2s" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.609893  376177 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.615364  376177 pod_ready.go:93] pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:57:07.615393  376177 pod_ready.go:82] duration metric: took 5.491259ms for pod "etcd-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.615412  376177 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.620554  376177 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:57:07.620576  376177 pod_ready.go:82] duration metric: took 5.156146ms for pod "kube-apiserver-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.620588  376177 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.626167  376177 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:57:07.626189  376177 pod_ready.go:82] duration metric: took 5.59358ms for pod "kube-controller-manager-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:07.626200  376177 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g84zv" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:08.001946  376177 pod_ready.go:93] pod "kube-proxy-g84zv" in "kube-system" namespace has status "Ready":"True"
	I0916 11:57:08.001969  376177 pod_ready.go:82] duration metric: took 375.762639ms for pod "kube-proxy-g84zv" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:08.001979  376177 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:08.401181  376177 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace has status "Ready":"True"
	I0916 11:57:08.401209  376177 pod_ready.go:82] duration metric: took 399.22203ms for pod "kube-scheduler-default-k8s-diff-port-451928" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:08.401224  376177 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace to be "Ready" ...
	I0916 11:57:08.819395  376177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.737401745s)
	I0916 11:57:08.819469  376177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.646020402s)
	I0916 11:57:08.999364  376177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.781298222s)
	I0916 11:57:08.999730  376177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.765240604s)
	I0916 11:57:08.999767  376177 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-451928"
	I0916 11:57:09.001642  376177 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-451928 addons enable metrics-server
	
	I0916 11:57:09.003205  376177 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0916 11:57:09.004583  376177 addons.go:510] duration metric: took 4.775710566s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
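
The addon-enable sequence above applies each manifest with the node's bundled kubectl over SSH. A minimal by-hand check of the same step, assuming the addon's usual Deployment name `metrics-server` in kube-system (the host kubectl is unusable in this run, so this routes through the node as the log itself does):

		# verify the metrics-server addon Deployment is rolling out, using the node's own kubectl
		minikube -p default-k8s-diff-port-451928 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.31.1/kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
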
	I0916 11:57:10.407720  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:12.907650  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:14.908102  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:17.408169  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:19.907778  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:21.907848  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:23.908006  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:26.406423  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:28.407610  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:30.907039  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:33.406725  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:35.407468  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:37.907097  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:39.946005  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:42.407338  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:44.906741  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:46.907460  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:49.406840  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:51.407551  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:53.907149  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:56.407531  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:57:58.906621  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:00.907853  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:03.407898  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:05.906839  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:08.406770  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:10.907176  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:13.407551  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:15.906979  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:17.907523  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:19.907587  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:22.407466  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:24.907415  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:27.407431  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:29.407861  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:31.905959  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:33.907184  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:36.407645  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:38.907695  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:41.407310  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:43.408179  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:45.408570  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:47.906406  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:49.907433  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:51.908185  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:54.406917  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:56.907282  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:58:59.406887  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:01.907956  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:04.406678  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:06.406749  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:08.407722  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:10.907089  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:13.407491  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:15.906816  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:18.407035  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:20.907362  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:23.407453  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:25.906750  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:28.407758  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:30.906888  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:33.407568  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:35.906702  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:37.907632  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:40.407201  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:42.407589  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:44.407879  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:46.906626  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:48.907350  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:51.407652  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:53.906452  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:55.906690  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 11:59:57.908921  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:00.407328  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:02.408104  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:04.907252  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:07.408054  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:09.906981  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:12.407865  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:14.907819  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:17.408084  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:19.908088  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:22.407081  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:24.906758  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:27.407395  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:29.907194  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:32.408211  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:34.906958  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:37.407908  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:39.907159  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:42.407448  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:44.906666  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:46.907768  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:49.407252  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:51.907398  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:54.407135  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:56.407638  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:00:58.906872  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:01:00.907387  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:01:02.907918  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:01:05.406777  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:01:07.407492  376177 pod_ready.go:103] pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace has status "Ready":"False"
	I0916 12:01:08.406927  376177 pod_ready.go:82] duration metric: took 4m0.005689607s for pod "metrics-server-6867b74b74-6v8cb" in "kube-system" namespace to be "Ready" ...
	E0916 12:01:08.406952  376177 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 12:01:08.406970  376177 pod_ready.go:39] duration metric: took 4m1.009209203s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
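
The four-minute run of pod_ready.go:103 lines above is a simple poll loop against the pod's Ready condition, re-checked every few seconds until the 6m budget expires. With a working kubectl, the same wait can be reproduced in one command (a sketch; pod name taken from this log):

		# block until the pod reports Ready, or fail after the same 6m budget
		kubectl --context default-k8s-diff-port-451928 -n kube-system \
		  wait --for=condition=Ready pod/metrics-server-6867b74b74-6v8cb --timeout=6m
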
	I0916 12:01:08.406992  376177 api_server.go:52] waiting for apiserver process to appear ...
	I0916 12:01:08.407033  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 12:01:08.407097  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 12:01:08.444209  376177 cri.go:89] found id: "edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59"
	I0916 12:01:08.444230  376177 cri.go:89] found id: ""
	I0916 12:01:08.444238  376177 logs.go:276] 1 containers: [edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59]
	I0916 12:01:08.444284  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.447790  376177 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 12:01:08.447854  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 12:01:08.482390  376177 cri.go:89] found id: "96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80"
	I0916 12:01:08.482413  376177 cri.go:89] found id: ""
	I0916 12:01:08.482421  376177 logs.go:276] 1 containers: [96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80]
	I0916 12:01:08.482466  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.485954  376177 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 12:01:08.486025  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 12:01:08.523763  376177 cri.go:89] found id: "e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8"
	I0916 12:01:08.523794  376177 cri.go:89] found id: "de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2"
	I0916 12:01:08.523800  376177 cri.go:89] found id: ""
	I0916 12:01:08.523808  376177 logs.go:276] 2 containers: [e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8 de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2]
	I0916 12:01:08.523863  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.527345  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.530514  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 12:01:08.530613  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 12:01:08.565027  376177 cri.go:89] found id: "b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a"
	I0916 12:01:08.565048  376177 cri.go:89] found id: ""
	I0916 12:01:08.565056  376177 logs.go:276] 1 containers: [b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a]
	I0916 12:01:08.565110  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.568585  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 12:01:08.568657  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 12:01:08.601988  376177 cri.go:89] found id: "b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa"
	I0916 12:01:08.602013  376177 cri.go:89] found id: ""
	I0916 12:01:08.602023  376177 logs.go:276] 1 containers: [b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa]
	I0916 12:01:08.602082  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.605882  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 12:01:08.605960  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 12:01:08.641079  376177 cri.go:89] found id: "493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681"
	I0916 12:01:08.641106  376177 cri.go:89] found id: ""
	I0916 12:01:08.641116  376177 logs.go:276] 1 containers: [493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681]
	I0916 12:01:08.641172  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.644669  376177 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 12:01:08.644789  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 12:01:08.681755  376177 cri.go:89] found id: "32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5"
	I0916 12:01:08.681775  376177 cri.go:89] found id: ""
	I0916 12:01:08.681782  376177 logs.go:276] 1 containers: [32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5]
	I0916 12:01:08.681831  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.685307  376177 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 12:01:08.685379  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 12:01:08.719425  376177 cri.go:89] found id: "f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07"
	I0916 12:01:08.719448  376177 cri.go:89] found id: ""
	I0916 12:01:08.719457  376177 logs.go:276] 1 containers: [f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07]
	I0916 12:01:08.719523  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.723230  376177 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 12:01:08.723305  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 12:01:08.757641  376177 cri.go:89] found id: "f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486"
	I0916 12:01:08.757661  376177 cri.go:89] found id: "18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538"
	I0916 12:01:08.757665  376177 cri.go:89] found id: ""
	I0916 12:01:08.757672  376177 logs.go:276] 2 containers: [f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486 18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538]
	I0916 12:01:08.757715  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.761315  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:08.764816  376177 logs.go:123] Gathering logs for kube-apiserver [edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59] ...
	I0916 12:01:08.764845  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59"
	I0916 12:01:08.807987  376177 logs.go:123] Gathering logs for coredns [e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8] ...
	I0916 12:01:08.808019  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8"
	I0916 12:01:08.843472  376177 logs.go:123] Gathering logs for dmesg ...
	I0916 12:01:08.843508  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 12:01:08.867967  376177 logs.go:123] Gathering logs for describe nodes ...
	I0916 12:01:08.868005  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 12:01:08.966551  376177 logs.go:123] Gathering logs for etcd [96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80] ...
	I0916 12:01:08.966585  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80"
	I0916 12:01:09.005393  376177 logs.go:123] Gathering logs for kube-scheduler [b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a] ...
	I0916 12:01:09.005442  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a"
	I0916 12:01:09.051253  376177 logs.go:123] Gathering logs for kube-proxy [b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa] ...
	I0916 12:01:09.051293  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa"
	I0916 12:01:09.085733  376177 logs.go:123] Gathering logs for storage-provisioner [f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486] ...
	I0916 12:01:09.085767  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486"
	I0916 12:01:09.120083  376177 logs.go:123] Gathering logs for kindnet [32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5] ...
	I0916 12:01:09.120114  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5"
	I0916 12:01:09.158304  376177 logs.go:123] Gathering logs for CRI-O ...
	I0916 12:01:09.158332  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 12:01:09.224761  376177 logs.go:123] Gathering logs for kubelet ...
	I0916 12:01:09.224799  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 12:01:09.293597  376177 logs.go:123] Gathering logs for coredns [de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2] ...
	I0916 12:01:09.293636  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2"
	I0916 12:01:09.329702  376177 logs.go:123] Gathering logs for kube-controller-manager [493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681] ...
	I0916 12:01:09.329736  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681"
	I0916 12:01:09.381950  376177 logs.go:123] Gathering logs for kubernetes-dashboard [f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07] ...
	I0916 12:01:09.381984  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07"
	I0916 12:01:09.417610  376177 logs.go:123] Gathering logs for storage-provisioner [18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538] ...
	I0916 12:01:09.417644  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538"
	I0916 12:01:09.453625  376177 logs.go:123] Gathering logs for container status ...
	I0916 12:01:09.453668  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
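
The log-gathering pass that just completed follows one pattern per component: resolve container IDs with crictl, then tail each container's logs. A condensed sketch of that loop, using only the commands already shown in this log (run on the node):

		# for each component, find its containers and tail 400 log lines apiece
		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		            kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
		  for id in $(sudo crictl ps -a --quiet --name="$name"); do
		    sudo crictl logs --tail 400 "$id"
		  done
		done
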
	I0916 12:01:11.993391  376177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 12:01:12.005192  376177 api_server.go:72] duration metric: took 4m7.776347839s to wait for apiserver process to appear ...
	I0916 12:01:12.005226  376177 api_server.go:88] waiting for apiserver healthz status ...
	I0916 12:01:12.005271  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 12:01:12.005322  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 12:01:12.042403  376177 cri.go:89] found id: "edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59"
	I0916 12:01:12.042432  376177 cri.go:89] found id: ""
	I0916 12:01:12.042450  376177 logs.go:276] 1 containers: [edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59]
	I0916 12:01:12.042505  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.046205  376177 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 12:01:12.046278  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 12:01:12.083674  376177 cri.go:89] found id: "96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80"
	I0916 12:01:12.083700  376177 cri.go:89] found id: ""
	I0916 12:01:12.083710  376177 logs.go:276] 1 containers: [96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80]
	I0916 12:01:12.083761  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.087441  376177 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 12:01:12.087513  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 12:01:12.123582  376177 cri.go:89] found id: "e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8"
	I0916 12:01:12.123604  376177 cri.go:89] found id: "de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2"
	I0916 12:01:12.123609  376177 cri.go:89] found id: ""
	I0916 12:01:12.123618  376177 logs.go:276] 2 containers: [e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8 de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2]
	I0916 12:01:12.123693  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.127785  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.131107  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 12:01:12.131187  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 12:01:12.167403  376177 cri.go:89] found id: "b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a"
	I0916 12:01:12.167431  376177 cri.go:89] found id: ""
	I0916 12:01:12.167441  376177 logs.go:276] 1 containers: [b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a]
	I0916 12:01:12.167483  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.171067  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 12:01:12.171129  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 12:01:12.205718  376177 cri.go:89] found id: "b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa"
	I0916 12:01:12.205740  376177 cri.go:89] found id: ""
	I0916 12:01:12.205752  376177 logs.go:276] 1 containers: [b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa]
	I0916 12:01:12.205796  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.209436  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 12:01:12.209496  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 12:01:12.244460  376177 cri.go:89] found id: "493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681"
	I0916 12:01:12.244485  376177 cri.go:89] found id: ""
	I0916 12:01:12.244496  376177 logs.go:276] 1 containers: [493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681]
	I0916 12:01:12.244553  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.248492  376177 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 12:01:12.248557  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 12:01:12.283345  376177 cri.go:89] found id: "32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5"
	I0916 12:01:12.283373  376177 cri.go:89] found id: ""
	I0916 12:01:12.283383  376177 logs.go:276] 1 containers: [32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5]
	I0916 12:01:12.283444  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.287082  376177 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 12:01:12.287156  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 12:01:12.321545  376177 cri.go:89] found id: "f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07"
	I0916 12:01:12.321571  376177 cri.go:89] found id: ""
	I0916 12:01:12.321581  376177 logs.go:276] 1 containers: [f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07]
	I0916 12:01:12.321641  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.325088  376177 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 12:01:12.325172  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 12:01:12.359328  376177 cri.go:89] found id: "f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486"
	I0916 12:01:12.359353  376177 cri.go:89] found id: "18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538"
	I0916 12:01:12.359359  376177 cri.go:89] found id: ""
	I0916 12:01:12.359368  376177 logs.go:276] 2 containers: [f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486 18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538]
	I0916 12:01:12.359432  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.363522  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:12.367182  376177 logs.go:123] Gathering logs for coredns [e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8] ...
	I0916 12:01:12.367209  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8"
	I0916 12:01:12.403385  376177 logs.go:123] Gathering logs for storage-provisioner [f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486] ...
	I0916 12:01:12.403421  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486"
	I0916 12:01:12.442069  376177 logs.go:123] Gathering logs for storage-provisioner [18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538] ...
	I0916 12:01:12.442107  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538"
	I0916 12:01:12.476779  376177 logs.go:123] Gathering logs for etcd [96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80] ...
	I0916 12:01:12.476809  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80"
	I0916 12:01:12.517385  376177 logs.go:123] Gathering logs for kube-scheduler [b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a] ...
	I0916 12:01:12.517426  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a"
	I0916 12:01:12.561443  376177 logs.go:123] Gathering logs for describe nodes ...
	I0916 12:01:12.561480  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 12:01:12.655621  376177 logs.go:123] Gathering logs for kube-apiserver [edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59] ...
	I0916 12:01:12.655653  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59"
	I0916 12:01:12.700166  376177 logs.go:123] Gathering logs for coredns [de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2] ...
	I0916 12:01:12.700202  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2"
	I0916 12:01:12.737533  376177 logs.go:123] Gathering logs for kube-proxy [b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa] ...
	I0916 12:01:12.737565  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa"
	I0916 12:01:12.774030  376177 logs.go:123] Gathering logs for container status ...
	I0916 12:01:12.774064  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 12:01:12.813683  376177 logs.go:123] Gathering logs for CRI-O ...
	I0916 12:01:12.813717  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 12:01:12.882999  376177 logs.go:123] Gathering logs for kubelet ...
	I0916 12:01:12.883038  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 12:01:12.952292  376177 logs.go:123] Gathering logs for dmesg ...
	I0916 12:01:12.952329  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 12:01:12.976952  376177 logs.go:123] Gathering logs for kube-controller-manager [493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681] ...
	I0916 12:01:12.976990  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681"
	I0916 12:01:13.031611  376177 logs.go:123] Gathering logs for kindnet [32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5] ...
	I0916 12:01:13.031659  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5"
	I0916 12:01:13.077989  376177 logs.go:123] Gathering logs for kubernetes-dashboard [f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07] ...
	I0916 12:01:13.078025  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07"
	I0916 12:01:15.621360  376177 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0916 12:01:15.626263  376177 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0916 12:01:15.627253  376177 api_server.go:141] control plane version: v1.31.1
	I0916 12:01:15.627279  376177 api_server.go:131] duration metric: took 3.622045868s to wait for apiserver health ...
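
The healthz wait at api_server.go:253 is an HTTPS GET against the apiserver's /healthz endpoint, retried until it returns 200 with body "ok". The same probe by hand, against the address and port shown above (certificate verification skipped only because this is a throwaway test cluster):

		# probe the apiserver health endpoint from the log; expected output: ok
		curl -sk https://192.168.103.2:8444/healthz
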
	I0916 12:01:15.627287  376177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 12:01:15.627307  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 12:01:15.627352  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 12:01:15.663189  376177 cri.go:89] found id: "edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59"
	I0916 12:01:15.663218  376177 cri.go:89] found id: ""
	I0916 12:01:15.663228  376177 logs.go:276] 1 containers: [edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59]
	I0916 12:01:15.663281  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.666811  376177 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 12:01:15.666873  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 12:01:15.701355  376177 cri.go:89] found id: "96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80"
	I0916 12:01:15.701380  376177 cri.go:89] found id: ""
	I0916 12:01:15.701390  376177 logs.go:276] 1 containers: [96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80]
	I0916 12:01:15.701443  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.704940  376177 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 12:01:15.705002  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 12:01:15.739462  376177 cri.go:89] found id: "e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8"
	I0916 12:01:15.739498  376177 cri.go:89] found id: "de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2"
	I0916 12:01:15.739503  376177 cri.go:89] found id: ""
	I0916 12:01:15.739516  376177 logs.go:276] 2 containers: [e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8 de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2]
	I0916 12:01:15.739573  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.743174  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.747164  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 12:01:15.747235  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 12:01:15.782396  376177 cri.go:89] found id: "b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a"
	I0916 12:01:15.782422  376177 cri.go:89] found id: ""
	I0916 12:01:15.782429  376177 logs.go:276] 1 containers: [b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a]
	I0916 12:01:15.782477  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.786041  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 12:01:15.786108  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 12:01:15.822200  376177 cri.go:89] found id: "b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa"
	I0916 12:01:15.822220  376177 cri.go:89] found id: ""
	I0916 12:01:15.822227  376177 logs.go:276] 1 containers: [b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa]
	I0916 12:01:15.822284  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.826035  376177 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 12:01:15.826097  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 12:01:15.860182  376177 cri.go:89] found id: "493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681"
	I0916 12:01:15.860210  376177 cri.go:89] found id: ""
	I0916 12:01:15.860220  376177 logs.go:276] 1 containers: [493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681]
	I0916 12:01:15.860277  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.863922  376177 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 12:01:15.863996  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 12:01:15.898018  376177 cri.go:89] found id: "32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5"
	I0916 12:01:15.898045  376177 cri.go:89] found id: ""
	I0916 12:01:15.898055  376177 logs.go:276] 1 containers: [32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5]
	I0916 12:01:15.898121  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.902093  376177 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 12:01:15.902151  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 12:01:15.936733  376177 cri.go:89] found id: "f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486"
	I0916 12:01:15.936754  376177 cri.go:89] found id: "18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538"
	I0916 12:01:15.936759  376177 cri.go:89] found id: ""
	I0916 12:01:15.936766  376177 logs.go:276] 2 containers: [f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486 18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538]
	I0916 12:01:15.936811  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.940300  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.943914  376177 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 12:01:15.943969  376177 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 12:01:15.978550  376177 cri.go:89] found id: "f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07"
	I0916 12:01:15.978602  376177 cri.go:89] found id: ""
	I0916 12:01:15.978610  376177 logs.go:276] 1 containers: [f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07]
	I0916 12:01:15.978657  376177 ssh_runner.go:195] Run: which crictl
	I0916 12:01:15.982075  376177 logs.go:123] Gathering logs for describe nodes ...
	I0916 12:01:15.982101  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 12:01:16.074728  376177 logs.go:123] Gathering logs for kube-controller-manager [493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681] ...
	I0916 12:01:16.074758  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681"
	I0916 12:01:16.128690  376177 logs.go:123] Gathering logs for storage-provisioner [f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486] ...
	I0916 12:01:16.128723  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486"
	I0916 12:01:16.163273  376177 logs.go:123] Gathering logs for kubernetes-dashboard [f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07] ...
	I0916 12:01:16.163302  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07"
	I0916 12:01:16.198379  376177 logs.go:123] Gathering logs for kindnet [32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5] ...
	I0916 12:01:16.198409  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5"
	I0916 12:01:16.236184  376177 logs.go:123] Gathering logs for storage-provisioner [18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538] ...
	I0916 12:01:16.236215  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538"
	I0916 12:01:16.269557  376177 logs.go:123] Gathering logs for CRI-O ...
	I0916 12:01:16.269583  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 12:01:16.338590  376177 logs.go:123] Gathering logs for dmesg ...
	I0916 12:01:16.338635  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 12:01:16.363428  376177 logs.go:123] Gathering logs for etcd [96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80] ...
	I0916 12:01:16.363463  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80"
	I0916 12:01:16.402849  376177 logs.go:123] Gathering logs for coredns [e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8] ...
	I0916 12:01:16.402892  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8"
	I0916 12:01:16.439475  376177 logs.go:123] Gathering logs for coredns [de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2] ...
	I0916 12:01:16.439512  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2"
	I0916 12:01:16.476323  376177 logs.go:123] Gathering logs for kubelet ...
	I0916 12:01:16.476358  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 12:01:16.544853  376177 logs.go:123] Gathering logs for kube-apiserver [edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59] ...
	I0916 12:01:16.544893  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59"
	I0916 12:01:16.589544  376177 logs.go:123] Gathering logs for kube-scheduler [b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a] ...
	I0916 12:01:16.589578  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a"
	I0916 12:01:16.634868  376177 logs.go:123] Gathering logs for container status ...
	I0916 12:01:16.634902  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 12:01:16.675207  376177 logs.go:123] Gathering logs for kube-proxy [b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa] ...
	I0916 12:01:16.675239  376177 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa"
	I0916 12:01:19.222850  376177 system_pods.go:59] 10 kube-system pods found
	I0916 12:01:19.222905  376177 system_pods.go:61] "coredns-7c65d6cfc9-c6qt9" [4e0063e4-a603-400c-acb8-094aed6b2941] Running
	I0916 12:01:19.222913  376177 system_pods.go:61] "coredns-7c65d6cfc9-tnm2s" [1ea2318a-d454-406d-bb11-aa3e16dc2950] Running
	I0916 12:01:19.222917  376177 system_pods.go:61] "etcd-default-k8s-diff-port-451928" [1b71472f-f6fc-4a12-bbfc-0ee84a439f81] Running
	I0916 12:01:19.222921  376177 system_pods.go:61] "kindnet-rk7s2" [9b5ccae0-58d8-475c-9c5a-dbb30e19f569] Running
	I0916 12:01:19.222924  376177 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-451928" [f1bb7524-02b3-4ba9-9e22-e4993a8a10b1] Running
	I0916 12:01:19.222928  376177 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-451928" [89cefae9-3120-4eda-beea-28223e0ce7f0] Running
	I0916 12:01:19.222932  376177 system_pods.go:61] "kube-proxy-g84zv" [9e114aae-0ef0-40a3-96c6-f2bc67943f01] Running
	I0916 12:01:19.222936  376177 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-451928" [c53be62e-0975-4134-9769-7df0c6a05afb] Running
	I0916 12:01:19.222945  376177 system_pods.go:61] "metrics-server-6867b74b74-6v8cb" [5b81dba3-8443-4591-b969-a08337476107] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 12:01:19.222952  376177 system_pods.go:61] "storage-provisioner" [3e5fdbb0-ecfb-490a-8314-e624e944b4b5] Running
	I0916 12:01:19.222961  376177 system_pods.go:74] duration metric: took 3.59566795s to wait for pod list to return data ...
	I0916 12:01:19.222971  376177 default_sa.go:34] waiting for default service account to be created ...
	I0916 12:01:19.225967  376177 default_sa.go:45] found service account: "default"
	I0916 12:01:19.225991  376177 default_sa.go:55] duration metric: took 3.010649ms for default service account to be created ...
	I0916 12:01:19.226000  376177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 12:01:19.231154  376177 system_pods.go:86] 10 kube-system pods found
	I0916 12:01:19.231181  376177 system_pods.go:89] "coredns-7c65d6cfc9-c6qt9" [4e0063e4-a603-400c-acb8-094aed6b2941] Running
	I0916 12:01:19.231187  376177 system_pods.go:89] "coredns-7c65d6cfc9-tnm2s" [1ea2318a-d454-406d-bb11-aa3e16dc2950] Running
	I0916 12:01:19.231191  376177 system_pods.go:89] "etcd-default-k8s-diff-port-451928" [1b71472f-f6fc-4a12-bbfc-0ee84a439f81] Running
	I0916 12:01:19.231196  376177 system_pods.go:89] "kindnet-rk7s2" [9b5ccae0-58d8-475c-9c5a-dbb30e19f569] Running
	I0916 12:01:19.231200  376177 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-451928" [f1bb7524-02b3-4ba9-9e22-e4993a8a10b1] Running
	I0916 12:01:19.231205  376177 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-451928" [89cefae9-3120-4eda-beea-28223e0ce7f0] Running
	I0916 12:01:19.231209  376177 system_pods.go:89] "kube-proxy-g84zv" [9e114aae-0ef0-40a3-96c6-f2bc67943f01] Running
	I0916 12:01:19.231213  376177 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-451928" [c53be62e-0975-4134-9769-7df0c6a05afb] Running
	I0916 12:01:19.231219  376177 system_pods.go:89] "metrics-server-6867b74b74-6v8cb" [5b81dba3-8443-4591-b969-a08337476107] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 12:01:19.231222  376177 system_pods.go:89] "storage-provisioner" [3e5fdbb0-ecfb-490a-8314-e624e944b4b5] Running
	I0916 12:01:19.231229  376177 system_pods.go:126] duration metric: took 5.22456ms to wait for k8s-apps to be running ...
	I0916 12:01:19.231236  376177 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 12:01:19.231278  376177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 12:01:19.242404  376177 system_svc.go:56] duration metric: took 11.15813ms WaitForService to wait for kubelet
	I0916 12:01:19.242432  376177 kubeadm.go:582] duration metric: took 4m15.013594166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:01:19.242453  376177 node_conditions.go:102] verifying NodePressure condition ...
	I0916 12:01:19.245652  376177 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 12:01:19.245682  376177 node_conditions.go:123] node cpu capacity is 8
	I0916 12:01:19.245696  376177 node_conditions.go:105] duration metric: took 3.236968ms to run NodePressure ...
	I0916 12:01:19.245711  376177 start.go:241] waiting for startup goroutines ...
	I0916 12:01:19.245721  376177 start.go:246] waiting for cluster config update ...
	I0916 12:01:19.245738  376177 start.go:255] writing updated cluster config ...
	I0916 12:01:19.246081  376177 ssh_runner.go:195] Run: rm -f paused
	I0916 12:01:19.252780  376177 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-451928" cluster and "default" namespace by default
	E0916 12:01:19.254349  376177 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
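
The closing error repeats the failure seen throughout this report: every host-side kubectl invocation dies with "exec format error", which on Linux almost always means the binary was built for a different architecture, or is truncated or corrupt. A quick way to confirm on the test host (file and uname are standard tools; the path is the one from the error):

		# compare the binary's reported architecture against the host's
		file /usr/local/bin/kubectl
		uname -m
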
	
	
	==> CRI-O <==
	Sep 16 12:00:07 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:07.928364782Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 12:00:18 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:18.900580818Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1171a41e-b6a3-48b7-b2d1-f9358061af20 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:18 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:18.900870204Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1171a41e-b6a3-48b7-b2d1-f9358061af20 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.900282280Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8360f1bb-6744-45e6-9040-5c4b60e03df2 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.900558078Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8360f1bb-6744-45e6-9040-5c4b60e03df2 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.901280165Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=628cfd63-d401-4486-a2de-bd8aedcbeca9 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.901544620Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=628cfd63-d401-4486-a2de-bd8aedcbeca9 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.902263275Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs/dashboard-metrics-scraper" id=5d1c93cc-7a6f-401f-986c-be4f234e9896 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.902386365Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.953582927Z" level=info msg="Created container d47da5c55f59b4eb38e791be901da088f21942555339db0cb9611244eb86c7af: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs/dashboard-metrics-scraper" id=5d1c93cc-7a6f-401f-986c-be4f234e9896 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.954223197Z" level=info msg="Starting container: d47da5c55f59b4eb38e791be901da088f21942555339db0cb9611244eb86c7af" id=ade482a3-6545-455b-86d2-17c6d43bea27 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 12:00:27 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:27.959954942Z" level=info msg="Started container" PID=2480 containerID=d47da5c55f59b4eb38e791be901da088f21942555339db0cb9611244eb86c7af description=kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs/dashboard-metrics-scraper id=ade482a3-6545-455b-86d2-17c6d43bea27 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e660ff5d3c61b275f9d41c1834ed09d375763a145f7d81ce005d4237627ea130
	Sep 16 12:00:27 default-k8s-diff-port-451928 conmon[2468]: conmon d47da5c55f59b4eb38e7 <ninfo>: container 2480 exited with status 1
	Sep 16 12:00:28 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:28.398355107Z" level=info msg="Removing container: fd2b07295ac8c4b3b7bd077eb761f3a61450b783f5f2f1f9861f60e83d7eb499" id=20aa47f8-aaf1-4679-bc85-e636ecb43d4b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 12:00:28 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:28.411600943Z" level=info msg="Removed container fd2b07295ac8c4b3b7bd077eb761f3a61450b783f5f2f1f9861f60e83d7eb499: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs/dashboard-metrics-scraper" id=20aa47f8-aaf1-4679-bc85-e636ecb43d4b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 12:00:29 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:29.900414613Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=7e94f251-1cd3-46bf-a75e-758272f93c43 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:29 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:29.900767963Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=7e94f251-1cd3-46bf-a75e-758272f93c43 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:44 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:44.900083958Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8ef20891-c911-4b5c-8fb4-323039cbffc0 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:44 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:44.900287422Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8ef20891-c911-4b5c-8fb4-323039cbffc0 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:56 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:56.900111053Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a7c77922-66f2-4a46-b42e-44b478fd0a05 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:00:56 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:00:56.900407028Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a7c77922-66f2-4a46-b42e-44b478fd0a05 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:01:10 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:01:10.900098430Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d41fe66b-a6c4-4377-90c4-79d1c253b95a name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:01:10 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:01:10.900345064Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d41fe66b-a6c4-4377-90c4-79d1c253b95a name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:01:23 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:01:23.900487139Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1d3684e2-4a08-43ca-adec-f7ab5b454380 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:01:23 default-k8s-diff-port-451928 crio[682]: time="2024-09-16 12:01:23.900827130Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1d3684e2-4a08-43ca-adec-f7ab5b454380 name=/runtime.v1.ImageService/ImageStatus
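	Note: the repeated "Image fake.domain/registry.k8s.io/echoserver:1.4 not found" messages correspond to the metrics-server pod shown as Pending earlier; fake.domain is not a resolvable registry host, so the pull can never succeed. A sketch for confirming this from the node (assuming crictl is available there, as it is on minikube nodes):

	    # List echoserver images known to CRI-O; only the real registry.k8s.io tag appears
	    sudo crictl images | grep echoserver
	    # Query CRI image status directly; expect "not found" for the fake.domain tag
	    sudo crictl inspecti fake.domain/registry.k8s.io/echoserver:1.4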
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	d47da5c55f59b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           About a minute ago   Exited              dashboard-metrics-scraper   5                   e660ff5d3c61b       dashboard-metrics-scraper-7c96f5b85b-2dgcs
	f89283ecb2dbe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           3 minutes ago        Running             storage-provisioner         2                   18cc4f018b285       storage-provisioner
	f2d7e759a642c       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   4 minutes ago        Running             kubernetes-dashboard        0                   7c5877c89b834       kubernetes-dashboard-695b96c756-zqv8v
	e8580c9f11e76       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           4 minutes ago        Running             coredns                     1                   60776ee7f0383       coredns-7c65d6cfc9-c6qt9
	de9a8f5d63182       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           4 minutes ago        Running             coredns                     1                   9134592bd5f90       coredns-7c65d6cfc9-tnm2s
	18d275f68aa89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           4 minutes ago        Exited              storage-provisioner         1                   18cc4f018b285       storage-provisioner
	32b2e90eb4aa6       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                           4 minutes ago        Running             kindnet-cni                 1                   bd2e87f003406       kindnet-rk7s2
	b63a981f03904       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                           4 minutes ago        Running             kube-proxy                  1                   4c4f9a16e4f5f       kube-proxy-g84zv
	edb4888d8a011       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                           4 minutes ago        Running             kube-apiserver              1                   f117a9aaea222       kube-apiserver-default-k8s-diff-port-451928
	b3dc59d276a70       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                           4 minutes ago        Running             kube-scheduler              1                   b57efaec79a3e       kube-scheduler-default-k8s-diff-port-451928
	493f3bd594610       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                           4 minutes ago        Running             kube-controller-manager     1                   d9d0ca036b242       kube-controller-manager-default-k8s-diff-port-451928
	96ceff880cbf5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                           4 minutes ago        Running             etcd                        1                   7f2a81c7ec535       etcd-default-k8s-diff-port-451928
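	Note: the table shows dashboard-metrics-scraper in Exited state on attempt 5, i.e. crash-looping, while every other container is Running. A sketch for pulling the failing container's output, using the pod name from the table and standard kubectl flags:

	    # Output from the previous container instance (drop --previous for the current one)
	    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-7c96f5b85b-2dgcs --previous
	    # Exit code, restart count, and recent events for the pod
	    kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-7c96f5b85b-2dgcs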
	
	
	==> coredns [de9a8f5d631828180cb801a4b96df801191b87a4e73192af7500b11273fb8ed2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55074 - 24534 "HINFO IN 2944044365132946491.7460352224504708993. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016015931s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1138697376]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:57:09.594) (total time: 30000ms):
	Trace[1138697376]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:57:39.595)
	Trace[1138697376]: [30.000963633s] [30.000963633s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[755240220]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:57:09.594) (total time: 30000ms):
	Trace[755240220]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:57:39.595)
	Trace[755240220]: [30.000798091s] [30.000798091s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1323568849]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:57:09.594) (total time: 30000ms):
	Trace[1323568849]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:57:39.595)
	Trace[1323568849]: [30.000915058s] [30.000915058s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [e8580c9f11e76b0ec049eeaf9a14a83e7aca802a75e562fe3d432c4775c4f3a8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41455 - 12757 "HINFO IN 7234783018970106260.7847831689011664434. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010906454s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1918560343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:57:09.530) (total time: 30001ms):
	Trace[1918560343]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:57:39.531)
	Trace[1918560343]: [30.00132936s] [30.00132936s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1953025356]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:57:09.593) (total time: 30000ms):
	Trace[1953025356]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:57:39.594)
	Trace[1953025356]: [30.000632399s] [30.000632399s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1985152228]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:57:09.593) (total time: 30000ms):
	Trace[1985152228]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:57:39.594)
	Trace[1985152228]: [30.000771955s] [30.000771955s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
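	Note: both coredns replicas time out reaching the in-cluster API service VIP (10.96.0.1:443) for the first ~30s after the restart (11:57:09 to 11:57:39); since both pods are Running in the container-status table above, the condition appears transient, likely while service routing was still being reprogrammed. A quick reachability probe from inside the cluster, assuming the curlimages/curl image is acceptable for a throwaway pod:

	    # One-shot pod that probes the API service VIP; -k skips certificate verification
	    kubectl run api-check --rm -it --restart=Never --image=curlimages/curl -- \
	      curl -ksS https://10.96.0.1:443/livez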
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-451928
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-451928
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=default-k8s-diff-port-451928
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_56_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-451928
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 12:01:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:57:38 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:57:38 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:57:38 +0000   Mon, 16 Sep 2024 11:56:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:57:38 +0000   Mon, 16 Sep 2024 11:56:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-451928
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7cc8b3b691f479288b4f41a5c55f772
	  System UUID:                96d27eb1-3e28-4d66-8a00-17bd26589e25
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-c6qt9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m2s
	  kube-system                 coredns-7c65d6cfc9-tnm2s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m2s
	  kube-system                 etcd-default-k8s-diff-port-451928                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m7s
	  kube-system                 kindnet-rk7s2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m2s
	  kube-system                 kube-apiserver-default-k8s-diff-port-451928             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-451928    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-g84zv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-default-k8s-diff-port-451928             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 metrics-server-6867b74b74-6v8cb                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m42s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-7c96f5b85b-2dgcs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-zqv8v                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             490Mi (1%)   390Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m1s                   kube-proxy       
	  Normal   Starting                 4m21s                  kube-proxy       
	  Normal   NodeHasSufficientPID     5m7s                   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m7s                   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m7s                   kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m7s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m3s                   node-controller  Node default-k8s-diff-port-451928 event: Registered Node default-k8s-diff-port-451928 in Controller
	  Normal   NodeReady                4m50s                  kubelet          Node default-k8s-diff-port-451928 status is now: NodeReady
	  Normal   Starting                 4m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m28s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m27s (x8 over 4m28s)  kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m27s (x8 over 4m28s)  kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m27s (x7 over 4m28s)  kubelet          Node default-k8s-diff-port-451928 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m21s                  node-controller  Node default-k8s-diff-port-451928 event: Registered Node default-k8s-diff-port-451928 in Controller
	
	
	==> dmesg <==
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.954619] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000006] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.059994] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000007] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +6.207537] net_ratelimit: 5 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +8.191403] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000002] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.003944] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000002] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
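	Note: the martian-source messages involve the service VIP (10.96.0.1) arriving on the docker bridge (br-22c51b08b0ca), a pattern commonly seen when pod/service traffic is hairpinned through nested container networking; in this setup it is presumed benign. The logging itself is controlled by sysctls, which can be checked on the host:

	    # Martian logging and reverse-path filtering settings that produce these lines
	    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter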
	
	
	==> etcd [96ceff880cbf5877392ff14289c79a89fd9888dee595ad92605292230fd01a80] <==
	{"level":"info","ts":"2024-09-16T11:57:05.006964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2024-09-16T11:57:05.007062Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2024-09-16T11:57:05.007192Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:57:05.007233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:57:05.008663Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:57:05.008793Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:57:05.009993Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T11:57:05.008950Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:57:05.008988Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:57:05.913870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T11:57:05.913920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:57:05.913973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T11:57:05.913992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:57:05.914000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2024-09-16T11:57:05.914013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:57:05.914027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2024-09-16T11:57:05.915201Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-451928 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:57:05.915237Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:57:05.915251Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:57:05.915532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:57:05.915559Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:57:05.916412Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:57:05.916502Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:57:05.917670Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T11:57:05.917674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:01:31 up  1:43,  0 users,  load average: 0.42, 0.79, 0.86
	Linux default-k8s-diff-port-451928 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [32b2e90eb4aa693da3f508d88ac55e313738b58550b3df6ddae4afcb522aa3e5] <==
	I0916 11:59:30.025489       1 main.go:299] handling current node
	I0916 11:59:40.021530       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:59:40.021570       1 main.go:299] handling current node
	I0916 11:59:50.018501       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:59:50.018554       1 main.go:299] handling current node
	I0916 12:00:00.025436       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:00:00.025477       1 main.go:299] handling current node
	I0916 12:00:10.018523       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:00:10.018562       1 main.go:299] handling current node
	I0916 12:00:20.018416       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:00:20.018454       1 main.go:299] handling current node
	I0916 12:00:30.018970       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:00:30.019020       1 main.go:299] handling current node
	I0916 12:00:40.019696       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:00:40.019733       1 main.go:299] handling current node
	I0916 12:00:50.025505       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:00:50.025540       1 main.go:299] handling current node
	I0916 12:01:00.026636       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:01:00.026685       1 main.go:299] handling current node
	I0916 12:01:10.018341       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:01:10.018382       1 main.go:299] handling current node
	I0916 12:01:20.025442       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:01:20.025492       1 main.go:299] handling current node
	I0916 12:01:30.027527       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:01:30.027563       1 main.go:299] handling current node
	
	
	==> kube-apiserver [edb4888d8a01140e5fce39d75b345312b60babf1dc9d0dfc1158949680e6dc59] <==
	I0916 11:57:08.825437       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:57:08.931883       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.250.128"}
	I0916 11:57:08.948606       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.238.175"}
	I0916 11:57:10.873920       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:57:10.873976       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:57:11.126822       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:57:11.226732       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	W0916 11:58:08.600523       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:58:08.600533       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:58:08.600619       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:58:08.600618       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0916 11:58:08.601656       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:58:08.601652       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 12:00:08.602451       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 12:00:08.602522       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0916 12:00:08.602535       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 12:00:08.602608       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 12:00:08.603692       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 12:00:08.603728       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
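	Note: the apiserver's recurring 503s for v1beta1.metrics.k8s.io mean the aggregated APIService has no healthy backend, consistent with the metrics-server pod stuck Pending on the unpullable fake.domain image. Two standard checks (the k8s-app=metrics-server label is the usual upstream selector, assumed here):

	    # Availability of the aggregated API; expect Available=False
	    kubectl get apiservice v1beta1.metrics.k8s.io
	    # Backing pod state; expect Pending / image pull failure
	    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide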
	
	
	==> kube-controller-manager [493f3bd5946103ddd1d23b3b01e6b23440e7d087152f5a438517434199c07681] <==
	I0916 11:58:09.126350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="77.782µs"
	E0916 11:58:10.886484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:58:11.327352       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:58:16.142286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="94.713µs"
	I0916 11:58:22.910430       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="82.174µs"
	E0916 11:58:40.891766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:58:41.334927       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:58:49.910832       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="66.46µs"
	I0916 11:58:53.217900       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="65.51µs"
	I0916 11:58:56.142296       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="61.894µs"
	I0916 11:59:03.912648       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="105.004µs"
	E0916 11:59:10.897306       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:59:11.342260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:59:40.903033       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:59:41.350385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 12:00:10.908388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:00:11.357801       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 12:00:18.910470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="138.5µs"
	I0916 12:00:28.410483       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="108.965µs"
	I0916 12:00:29.911649       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="147.657µs"
	I0916 12:00:36.142749       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="72.746µs"
	E0916 12:00:40.913487       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:00:41.364633       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 12:01:10.919652       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:01:11.371484       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b63a981f039042b123ccbac9233d8cdd38da117d1351814b6d58d2312003a9aa] <==
	I0916 11:57:09.597306       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:57:09.721300       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 11:57:09.721412       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:57:09.896172       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:57:09.896244       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:57:09.898531       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:57:09.898865       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:57:09.898901       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:57:09.899954       1 config.go:328] "Starting node config controller"
	I0916 11:57:09.899985       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:57:09.900188       1 config.go:199] "Starting service config controller"
	I0916 11:57:09.900799       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:57:09.900862       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:57:09.900914       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:57:10.000190       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:57:10.001471       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:57:10.001471       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b3dc59d276a70e080f1439442087635bec94bdba137436453e67221ee40b647a] <==
	W0916 11:57:07.497015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0916 11:57:07.497065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0916 11:57:07.497172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0916 11:57:07.497215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0916 11:57:07.497326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0916 11:57:07.497415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError"
	W0916 11:57:07.502053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0916 11:57:07.502175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0916 11:57:07.502330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0916 11:57:07.502403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError"
	W0916 11:57:07.502575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0916 11:57:07.502641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0916 11:57:07.502857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0916 11:57:07.503069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0916 11:57:07.503315       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0916 11:57:07.503397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError"
	W0916 11:57:07.503555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0916 11:57:07.503620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	W0916 11:57:07.503891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0916 11:57:07.504086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0916 11:57:07.509392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0916 11:57:07.509443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0916 11:57:07.510153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0916 11:57:07.510232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	I0916 11:57:07.593949       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 12:00:33 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:33.852107     828 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488033851834676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:00:36 default-k8s-diff-port-451928 kubelet[828]: I0916 12:00:36.132427     828 scope.go:117] "RemoveContainer" containerID="d47da5c55f59b4eb38e791be901da088f21942555339db0cb9611244eb86c7af"
	Sep 16 12:00:36 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:36.132634     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-2dgcs_kubernetes-dashboard(1a2b3fef-115b-4870-9264-57d0f20efa13)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs" podUID="1a2b3fef-115b-4870-9264-57d0f20efa13"
	Sep 16 12:00:43 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:43.853884     828 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488043853641754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:00:43 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:43.853928     828 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488043853641754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:00:44 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:44.900561     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6v8cb" podUID="5b81dba3-8443-4591-b969-a08337476107"
	Sep 16 12:00:49 default-k8s-diff-port-451928 kubelet[828]: I0916 12:00:49.899930     828 scope.go:117] "RemoveContainer" containerID="d47da5c55f59b4eb38e791be901da088f21942555339db0cb9611244eb86c7af"
	Sep 16 12:00:49 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:49.900171     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-2dgcs_kubernetes-dashboard(1a2b3fef-115b-4870-9264-57d0f20efa13)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs" podUID="1a2b3fef-115b-4870-9264-57d0f20efa13"
	Sep 16 12:00:53 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:53.855098     828 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488053854899875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:00:53 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:53.855133     828 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488053854899875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:00:56 default-k8s-diff-port-451928 kubelet[828]: E0916 12:00:56.900729     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6v8cb" podUID="5b81dba3-8443-4591-b969-a08337476107"
	Sep 16 12:01:00 default-k8s-diff-port-451928 kubelet[828]: I0916 12:01:00.899673     828 scope.go:117] "RemoveContainer" containerID="d47da5c55f59b4eb38e791be901da088f21942555339db0cb9611244eb86c7af"
	Sep 16 12:01:00 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:00.899986     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-2dgcs_kubernetes-dashboard(1a2b3fef-115b-4870-9264-57d0f20efa13)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs" podUID="1a2b3fef-115b-4870-9264-57d0f20efa13"
	Sep 16 12:01:03 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:03.857414     828 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488063856400713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:01:03 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:03.857452     828 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488063856400713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:01:10 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:10.900645     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6v8cb" podUID="5b81dba3-8443-4591-b969-a08337476107"
	Sep 16 12:01:13 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:13.858460     828 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488073858287028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:01:13 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:13.858506     828 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488073858287028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:01:13 default-k8s-diff-port-451928 kubelet[828]: I0916 12:01:13.900029     828 scope.go:117] "RemoveContainer" containerID="d47da5c55f59b4eb38e791be901da088f21942555339db0cb9611244eb86c7af"
	Sep 16 12:01:13 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:13.900283     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-2dgcs_kubernetes-dashboard(1a2b3fef-115b-4870-9264-57d0f20efa13)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs" podUID="1a2b3fef-115b-4870-9264-57d0f20efa13"
	Sep 16 12:01:23 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:23.859574     828 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488083859347081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:01:23 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:23.859617     828 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488083859347081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:01:23 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:23.901068     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6v8cb" podUID="5b81dba3-8443-4591-b969-a08337476107"
	Sep 16 12:01:26 default-k8s-diff-port-451928 kubelet[828]: I0916 12:01:26.899833     828 scope.go:117] "RemoveContainer" containerID="d47da5c55f59b4eb38e791be901da088f21942555339db0cb9611244eb86c7af"
	Sep 16 12:01:26 default-k8s-diff-port-451928 kubelet[828]: E0916 12:01:26.900028     828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-2dgcs_kubernetes-dashboard(1a2b3fef-115b-4870-9264-57d0f20efa13)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-2dgcs" podUID="1a2b3fef-115b-4870-9264-57d0f20efa13"
	
	
	==> kubernetes-dashboard [f2d7e759a642cb77b14461682eea20f289069b3bc232a051d509e733a6dd3b07] <==
	2024/09/16 11:57:17 Using namespace: kubernetes-dashboard
	2024/09/16 11:57:17 Using in-cluster config to connect to apiserver
	2024/09/16 11:57:17 Using secret token for csrf signing
	2024/09/16 11:57:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:57:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:57:17 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 11:57:17 Generating JWE encryption key
	2024/09/16 11:57:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:57:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:57:17 Initializing JWE encryption key from synchronized object
	2024/09/16 11:57:17 Creating in-cluster Sidecar client
	2024/09/16 11:57:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:57:17 Serving insecurely on HTTP port: 9090
	2024/09/16 11:57:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:58:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:58:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:59:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:59:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:00:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:00:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:01:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:57:17 Starting overwatch
	
	
	==> storage-provisioner [18d275f68aa89dcb42c5217a343bc2ebc598ede639e869d8db6aae2346fcf538] <==
	I0916 11:57:09.426516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 11:57:39.501847       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f89283ecb2dbe271a5a3bcc1cf7eb217703bcd1ecedbcbca1cf10877740fc486] <==
	I0916 11:57:40.147751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:57:40.155271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:57:40.155315       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:57:57.551459       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:57:57.551601       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18fcca8c-b8bd-4cf6-b5f8-70b48585a383", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-451928_db7eb017-3508-4e59-b8fd-c790bd7a603d became leader
	I0916 11:57:57.551685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-451928_db7eb017-3508-4e59-b8fd-c790bd7a603d!
	I0916 11:57:57.652293       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-451928_db7eb017-3508-4e59-b8fd-c790bd7a603d!
	

-- /stdout --
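
Note: the repeated "back-off 2m40s restarting failed container" entries in the kubelet log above are the kubelet's crash-loop restart delay rather than an independent failure. A minimal Go sketch of the assumed back-off policy (10s initial delay, doubled after every restart, capped at 5m; under that assumption 2m40s is the fifth-restart delay). This is an illustration, not kubelet or minikube code:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed policy: start at 10s, double per restart, cap at 5m.
        const initial = 10 * time.Second
        const maxDelay = 5 * time.Minute
        delay := initial
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

The printed sequence is 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, which matches the value the kubelet reports above.
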
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (590.306µs)
helpers_test.go:263: kubectl --context default-k8s-diff-port-451928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (7.02s)
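
Note: every "fork/exec /usr/local/bin/kubectl: exec format error" in this report means the kernel refused to execute the kubectl binary, which almost always indicates a binary built for a different CPU architecture than the host; the clusters under test are not at fault. Running "file /usr/local/bin/kubectl" shows the binary's target architecture directly. A short diagnostic sketch in Go (the path is taken from the failures above; the amd64/arm64 mapping is an assumption covering the common cases):

    package main

    import (
        "debug/elf"
        "fmt"
        "os"
        "runtime"
    )

    func main() {
        // Parse the binary that failed with "exec format error".
        f, err := elf.Open("/usr/local/bin/kubectl")
        if err != nil {
            fmt.Fprintln(os.Stderr, "cannot parse as ELF:", err)
            os.Exit(1)
        }
        defer f.Close()
        // Map the host architecture to the expected ELF machine type.
        want := map[string]elf.Machine{
            "amd64": elf.EM_X86_64,
            "arm64": elf.EM_AARCH64,
        }[runtime.GOARCH]
        fmt.Printf("binary machine=%v, host GOARCH=%s\n", f.Machine, runtime.GOARCH)
        if want != elf.EM_NONE && f.Machine != want {
            fmt.Println("architecture mismatch: executing this binary fails with 'exec format error'")
        }
    }
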

x
+
TestStartStop/group/embed-certs/serial/DeployApp (3.6s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-132595 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-132595 create -f testdata/busybox.yaml: fork/exec /usr/local/bin/kubectl: exec format error (679.655µs)
start_stop_delete_test.go:196: kubectl --context embed-certs-132595 create -f testdata/busybox.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-132595
helpers_test.go:235: (dbg) docker inspect embed-certs-132595:

-- stdout --
	[
	    {
	        "Id": "9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95",
	        "Created": "2024-09-16T12:02:27.844570227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393450,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T12:02:27.964272788Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/hosts",
	        "LogPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95-json.log",
	        "Name": "/embed-certs-132595",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-132595:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-132595",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-132595",
	                "Source": "/var/lib/docker/volumes/embed-certs-132595/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-132595",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-132595",
	                "name.minikube.sigs.k8s.io": "embed-certs-132595",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8051876631e629be3d63d04a25b08c24b1f81adc45f3ad239f7bc136e91b56ad",
	            "SandboxKey": "/var/run/docker/netns/8051876631e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-132595": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2bfc3c9091b0bc051827133f808c3cb85965e63d2bf1e9667fc1a6a160dc08f4",
	                    "EndpointID": "2e4a82502e88e3414290611bf291eaf399e6bd167c079853617718aca5cc9c76",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-132595",
	                        "9f079caa1423"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132595 -n embed-certs-132595
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-132595 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-132595 logs -n 25: (1.118589124s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-451928  | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-451928       | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-451928                           | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-483277 --memory=2200 --alsologtostderr   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-483277             | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-483277                  | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-483277 --memory=2200 --alsologtostderr   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-483277 image list                           | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	| delete  | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	| start   | -p embed-certs-132595                                  | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 12:02:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 12:02:22.316707  392749 out.go:345] Setting OutFile to fd 1 ...
	I0916 12:02:22.316980  392749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:02:22.316990  392749 out.go:358] Setting ErrFile to fd 2...
	I0916 12:02:22.316994  392749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:02:22.317211  392749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 12:02:22.317988  392749 out.go:352] Setting JSON to false
	I0916 12:02:22.319189  392749 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6282,"bootTime":1726481860,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 12:02:22.319253  392749 start.go:139] virtualization: kvm guest
	I0916 12:02:22.321724  392749 out.go:177] * [embed-certs-132595] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 12:02:22.323580  392749 notify.go:220] Checking for updates...
	I0916 12:02:22.323619  392749 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 12:02:22.325184  392749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 12:02:22.326831  392749 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:02:22.328293  392749 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 12:02:22.329741  392749 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 12:02:22.331375  392749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 12:02:22.333444  392749 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333594  392749 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333730  392749 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333861  392749 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 12:02:22.357827  392749 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 12:02:22.357973  392749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 12:02:22.415015  392749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 12:02:22.404189354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 12:02:22.415142  392749 docker.go:318] overlay module found
	I0916 12:02:22.418459  392749 out.go:177] * Using the docker driver based on user configuration
	I0916 12:02:22.420009  392749 start.go:297] selected driver: docker
	I0916 12:02:22.420030  392749 start.go:901] validating driver "docker" against <nil>
	I0916 12:02:22.420041  392749 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 12:02:22.420849  392749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 12:02:22.481968  392749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 12:02:22.472332251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 12:02:22.482174  392749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 12:02:22.482464  392749 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:02:22.484723  392749 out.go:177] * Using Docker driver with root privileges
	I0916 12:02:22.486426  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:22.486474  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:22.486482  392749 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 12:02:22.486556  392749 start.go:340] cluster config:
	{Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
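The single line above is Go's %+v rendering of minikube's cluster config struct, which is hard to scan. As a reading aid, here is a hand-trimmed sketch of just the fields this run actually sets; the field names come from the dump itself, but the types are inferred from the printed values and most zero-valued fields are omitted.

    package config

    // Hand-trimmed sketch of the struct dumped above. Field names are taken
    // from the %+v output; types are inferred and zero-valued fields omitted.
    type ClusterConfig struct {
        Name             string // "embed-certs-132595"
        EmbedCerts       bool   // true
        KicBaseImage     string // gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:...
        Memory           int    // 2200 (MB)
        CPUs             int    // 2
        DiskSize         int    // 20000 (MB)
        Driver           string // "docker"
        APIServerPort    int    // 8443
        KubernetesConfig KubernetesConfig
        Nodes            []Node
    }

    type KubernetesConfig struct {
        KubernetesVersion string // "v1.31.1"
        ClusterName       string // "embed-certs-132595"
        ContainerRuntime  string // "crio"
        NetworkPlugin     string // "cni"
        ServiceCIDR       string // "10.96.0.0/12"
    }

    type Node struct {
        IP                string // filled in once the container gets 192.168.103.2
        Port              int    // 8443
        KubernetesVersion string // "v1.31.1"
        ControlPlane      bool   // true
        Worker            bool   // true
    }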
	I0916 12:02:22.488572  392749 out.go:177] * Starting "embed-certs-132595" primary control-plane node in "embed-certs-132595" cluster
	I0916 12:02:22.490260  392749 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 12:02:22.492012  392749 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 12:02:22.493615  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:22.493670  392749 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 12:02:22.493684  392749 cache.go:56] Caching tarball of preloaded images
	I0916 12:02:22.493725  392749 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 12:02:22.493780  392749 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 12:02:22.493797  392749 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 12:02:22.493914  392749 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json ...
	I0916 12:02:22.493936  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json: {Name:mk85e2df12eb3418e581ab1558bdddacab4821d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
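The two lines above save the freshly generated profile under a write lock (lock.go's "WriteFile acquiring ..."). A minimal sketch of the save half, assuming a hypothetical saveConfig helper: marshal to JSON, write a temp file beside the target, then rename, so readers never observe a half-written config.json. The named lock that lock.go layers on top is omitted here.

    package main

    import (
        "encoding/json"
        "os"
        "path/filepath"
    )

    // saveConfig marshals cfg and writes it to path via a temp file plus
    // rename, so a crash mid-write never leaves a truncated config.json.
    func saveConfig(path string, cfg any) error {
        data, err := json.MarshalIndent(cfg, "", "    ")
        if err != nil {
            return err
        }
        tmp, err := os.CreateTemp(filepath.Dir(path), ".config-*.json")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // harmless after a successful rename
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), path) // atomic on POSIX filesystems
    }

    func main() {
        cfg := map[string]string{"Name": "embed-certs-132595"}
        if err := saveConfig("config.json", cfg); err != nil {
            panic(err)
        }
    }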
	W0916 12:02:22.516611  392749 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 12:02:22.516634  392749 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 12:02:22.516701  392749 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 12:02:22.516717  392749 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 12:02:22.516721  392749 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 12:02:22.516728  392749 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 12:02:22.516735  392749 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 12:02:22.577454  392749 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 12:02:22.577503  392749 cache.go:194] Successfully downloaded all kic artifacts
	I0916 12:02:22.577543  392749 start.go:360] acquireMachinesLock for embed-certs-132595: {Name:mk90285717afa09eeba6eb1eaf13ca243fd0e8ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:02:22.577688  392749 start.go:364] duration metric: took 123.446µs to acquireMachinesLock for "embed-certs-132595"
	I0916 12:02:22.577716  392749 start.go:93] Provisioning new machine with config: &{Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:02:22.577790  392749 start.go:125] createHost starting for "" (driver="docker")
	I0916 12:02:22.580825  392749 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 12:02:22.581158  392749 start.go:159] libmachine.API.Create for "embed-certs-132595" (driver="docker")
	I0916 12:02:22.581194  392749 client.go:168] LocalClient.Create starting
	I0916 12:02:22.581279  392749 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 12:02:22.581315  392749 main.go:141] libmachine: Decoding PEM data...
	I0916 12:02:22.581364  392749 main.go:141] libmachine: Parsing certificate...
	I0916 12:02:22.581424  392749 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 12:02:22.581453  392749 main.go:141] libmachine: Decoding PEM data...
	I0916 12:02:22.581469  392749 main.go:141] libmachine: Parsing certificate...
	I0916 12:02:22.581917  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 12:02:22.601058  392749 cli_runner.go:211] docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 12:02:22.601120  392749 network_create.go:284] running [docker network inspect embed-certs-132595] to gather additional debugging logs...
	I0916 12:02:22.601136  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595
	W0916 12:02:22.619588  392749 cli_runner.go:211] docker network inspect embed-certs-132595 returned with exit code 1
	I0916 12:02:22.619629  392749 network_create.go:287] error running [docker network inspect embed-certs-132595]: docker network inspect embed-certs-132595: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-132595 not found
	I0916 12:02:22.619641  392749 network_create.go:289] output of [docker network inspect embed-certs-132595]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-132595 not found
	
	** /stderr **
	I0916 12:02:22.619744  392749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 12:02:22.638437  392749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 12:02:22.639338  392749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 12:02:22.640220  392749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 12:02:22.641011  392749 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 12:02:22.641944  392749 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 12:02:22.642797  392749 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 12:02:22.643883  392749 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023ed510}
	I0916 12:02:22.643904  392749 network_create.go:124] attempt to create docker network embed-certs-132595 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 12:02:22.643965  392749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-132595 embed-certs-132595
	I0916 12:02:22.717370  392749 network_create.go:108] docker network embed-certs-132595 192.168.103.0/24 created
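The subnet probe above starts at 192.168.49.0/24 and walks upward in steps of 9 in the third octet (49, 58, 67, ...) until it reaches 192.168.103.0/24, the first /24 whose gateway is not already bound to a host bridge. A rough sketch of that walk, assuming the only freshness signal is whether the would-be gateway address is assigned to a local interface (the real network.go also tracks in-process reservations):

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... in steps
    // of 9 in the third octet, as the log above does, and returns the first
    // candidate whose gateway (.1) is not bound to a local interface.
    func firstFreeSubnet() (string, error) {
        taken := map[string]bool{}
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return "", err
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok {
                taken[ipn.IP.String()] = true
            }
        }
        for third := 49; third <= 255; third += 9 {
            gw := fmt.Sprintf("192.168.%d.1", third)
            if !taken[gw] {
                return fmt.Sprintf("192.168.%d.0/24", third), nil
            }
        }
        return "", fmt.Errorf("no free /24 found")
    }

    func main() {
        subnet, err := firstFreeSubnet()
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", subnet)
    }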
	I0916 12:02:22.717419  392749 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-132595" container
	I0916 12:02:22.717475  392749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 12:02:22.739425  392749 cli_runner.go:164] Run: docker volume create embed-certs-132595 --label name.minikube.sigs.k8s.io=embed-certs-132595 --label created_by.minikube.sigs.k8s.io=true
	I0916 12:02:22.758826  392749 oci.go:103] Successfully created a docker volume embed-certs-132595
	I0916 12:02:22.758921  392749 cli_runner.go:164] Run: docker run --rm --name embed-certs-132595-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-132595 --entrypoint /usr/bin/test -v embed-certs-132595:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 12:02:23.286517  392749 oci.go:107] Successfully prepared a docker volume embed-certs-132595
	I0916 12:02:23.286582  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:23.286608  392749 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 12:02:23.286686  392749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-132595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 12:02:27.777252  392749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-132595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490517682s)
	I0916 12:02:27.777293  392749 kic.go:203] duration metric: took 4.490683033s to extract preloaded images to volume ...
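The extraction above is a pattern worth calling out: a throwaway --rm container whose entrypoint is tar unpacks the preloaded image tarball directly into the named volume, so the node container later starts with a warm /var. A sketch that replays the exact command from the log via os/exec; the paths, volume name, and image reference are this run's values.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload replays the docker invocation logged above: a --rm
    // container with /usr/bin/tar as its entrypoint unpacks the lz4 tarball
    // (bind-mounted read-only) into the named volume mounted at /extractDir.
    func extractPreload(tarball, volume, image string) error {
        out, err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        // Values taken verbatim from the log lines above.
        err := extractPreload(
            "/home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4",
            "embed-certs-132595",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0")
        if err != nil {
            panic(err)
        }
    }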
	W0916 12:02:27.777479  392749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 12:02:27.777606  392749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 12:02:27.828245  392749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-132595 --name embed-certs-132595 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-132595 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-132595 --network embed-certs-132595 --ip 192.168.103.2 --volume embed-certs-132595:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 12:02:28.129271  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Running}}
	I0916 12:02:28.148758  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.168574  392749 cli_runner.go:164] Run: docker exec embed-certs-132595 stat /var/lib/dpkg/alternatives/iptables
	I0916 12:02:28.214356  392749 oci.go:144] the created container "embed-certs-132595" has a running status.
	I0916 12:02:28.214398  392749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa...
	I0916 12:02:28.579373  392749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 12:02:28.600739  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.623045  392749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 12:02:28.623068  392749 kic_runner.go:114] Args: [docker exec --privileged embed-certs-132595 chown docker:docker /home/docker/.ssh/authorized_keys]
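kic.go's key setup above has two halves: mint an RSA keypair on the host, then push the public half into /home/docker/.ssh/authorized_keys inside the container (with a chown so the docker user owns it). A self-contained sketch of the key half, using golang.org/x/crypto/ssh for the authorized_keys encoding:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // Generate an RSA key, write the PEM private key for the SSH client,
    // and emit the public key in authorized_keys format for the container.
    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
            panic(err)
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        // MarshalAuthorizedKey produces the "ssh-rsa AAAA..." line that
        // lands in /home/docker/.ssh/authorized_keys (381 bytes in this run).
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            panic(err)
        }
    }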
	I0916 12:02:28.687280  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.707892  392749 machine.go:93] provisionDockerMachine start ...
	I0916 12:02:28.707978  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:28.730282  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:28.730549  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:28.730566  392749 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 12:02:28.864997  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132595
	
	I0916 12:02:28.865036  392749 ubuntu.go:169] provisioning hostname "embed-certs-132595"
	I0916 12:02:28.865105  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:28.884140  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:28.884312  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:28.884326  392749 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-132595 && echo "embed-certs-132595" | sudo tee /etc/hostname
	I0916 12:02:29.033007  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132595
	
	I0916 12:02:29.033095  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.051460  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:29.051736  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:29.051767  392749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-132595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-132595/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-132595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 12:02:29.185811  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
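Everything from "provisionDockerMachine start" to here ran over minikube's native SSH client: it reads the host port Docker published for the container's 22/tcp (33128 in this run) out of docker container inspect, dials it, and runs roughly one command per session. A sketch of one such round trip, assuming the id_rsa generated earlier and skipping host-key verification since the container's host key is freshly minted:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // Dial the host port Docker published for the container's 22/tcp
    // (33128 in this run) as user "docker" and run `hostname`, mirroring
    // the first provisioning command in the log above.
    func main() {
        pemBytes, err := os.ReadFile("id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // key is freshly minted
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33128", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out) // "embed-certs-132595"
    }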
	I0916 12:02:29.185838  392749 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 12:02:29.185872  392749 ubuntu.go:177] setting up certificates
	I0916 12:02:29.185882  392749 provision.go:84] configureAuth start
	I0916 12:02:29.185932  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:29.205104  392749 provision.go:143] copyHostCerts
	I0916 12:02:29.205177  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 12:02:29.205191  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 12:02:29.205266  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 12:02:29.205379  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 12:02:29.205393  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 12:02:29.205443  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 12:02:29.205574  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 12:02:29.205591  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 12:02:29.205628  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 12:02:29.205725  392749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.embed-certs-132595 san=[127.0.0.1 192.168.103.2 embed-certs-132595 localhost minikube]
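The server cert generated above is signed by the minikube CA and carries the SAN list printed in san=[...]. A condensed sketch with crypto/x509, assuming caCert and caKey have already been loaded from ca.pem/ca-key.pem (loading omitted for brevity); the org, SANs, and 26280h lifetime mirror the log and config above:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // signServerCert sketches the server.pem generation logged above: a new
    // RSA key plus an x509 certificate carrying the SANs from the san=[...]
    // list, signed by the previously loaded minikube CA.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-132595"}},
            DNSNames:     []string{"embed-certs-132595", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        return os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    }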
	I0916 12:02:29.295413  392749 provision.go:177] copyRemoteCerts
	I0916 12:02:29.295493  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 12:02:29.295539  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.314056  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:29.410212  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 12:02:29.433316  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 12:02:29.457490  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 12:02:29.480514  392749 provision.go:87] duration metric: took 294.616578ms to configureAuth
	I0916 12:02:29.480546  392749 ubuntu.go:193] setting minikube options for container-runtime
	I0916 12:02:29.480721  392749 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:29.480840  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.499779  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:29.499970  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:29.499988  392749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 12:02:29.724131  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 12:02:29.724156  392749 machine.go:96] duration metric: took 1.016241182s to provisionDockerMachine
	I0916 12:02:29.724168  392749 client.go:171] duration metric: took 7.142967574s to LocalClient.Create
	I0916 12:02:29.724184  392749 start.go:167] duration metric: took 7.143028884s to libmachine.API.Create "embed-certs-132595"
	I0916 12:02:29.724192  392749 start.go:293] postStartSetup for "embed-certs-132595" (driver="docker")
	I0916 12:02:29.724206  392749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 12:02:29.724308  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 12:02:29.724425  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.742132  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:29.838555  392749 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 12:02:29.841984  392749 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 12:02:29.842030  392749 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 12:02:29.842042  392749 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 12:02:29.842049  392749 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 12:02:29.842061  392749 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 12:02:29.842134  392749 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 12:02:29.842223  392749 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 12:02:29.842335  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 12:02:29.850676  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 12:02:29.874023  392749 start.go:296] duration metric: took 149.81451ms for postStartSetup
	I0916 12:02:29.874395  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:29.891665  392749 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json ...
	I0916 12:02:29.891935  392749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 12:02:29.891976  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.910185  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.002481  392749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 12:02:30.007206  392749 start.go:128] duration metric: took 7.429401034s to createHost
	I0916 12:02:30.007234  392749 start.go:83] releasing machines lock for "embed-certs-132595", held for 7.4295318s
	I0916 12:02:30.007311  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:30.025002  392749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 12:02:30.025037  392749 ssh_runner.go:195] Run: cat /version.json
	I0916 12:02:30.025102  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:30.025103  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:30.043705  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.044185  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.210633  392749 ssh_runner.go:195] Run: systemctl --version
	I0916 12:02:30.215247  392749 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 12:02:30.353292  392749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 12:02:30.357777  392749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:02:30.376319  392749 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 12:02:30.376406  392749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:02:30.406228  392749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 12:02:30.406253  392749 start.go:495] detecting cgroup driver to use...
	I0916 12:02:30.406283  392749 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 12:02:30.406323  392749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 12:02:30.421100  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 12:02:30.432505  392749 docker.go:217] disabling cri-docker service (if available) ...
	I0916 12:02:30.432561  392749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 12:02:30.445665  392749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 12:02:30.459366  392749 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 12:02:30.541779  392749 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 12:02:30.620528  392749 docker.go:233] disabling docker service ...
	I0916 12:02:30.620593  392749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 12:02:30.640092  392749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 12:02:30.651391  392749 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 12:02:30.734601  392749 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 12:02:30.821037  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 12:02:30.832165  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 12:02:30.847898  392749 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 12:02:30.847957  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.858440  392749 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 12:02:30.858500  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.868040  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.877381  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.886632  392749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 12:02:30.895686  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.905708  392749 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.921634  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
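Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands, not a capture of the file; in particular the [crio.runtime]/[crio.image] section placement is assumed from stock cri-o layout, since the seds only match on the keys:

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"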
	I0916 12:02:30.931283  392749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 12:02:30.939458  392749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 12:02:30.947335  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:31.023886  392749 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 12:02:31.126953  392749 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 12:02:31.127024  392749 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 12:02:31.130456  392749 start.go:563] Will wait 60s for crictl version
	I0916 12:02:31.130515  392749 ssh_runner.go:195] Run: which crictl
	I0916 12:02:31.134039  392749 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 12:02:31.166783  392749 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 12:02:31.166863  392749 ssh_runner.go:195] Run: crio --version
	I0916 12:02:31.202361  392749 ssh_runner.go:195] Run: crio --version
	I0916 12:02:31.240370  392749 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 12:02:31.241854  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 12:02:31.258991  392749 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 12:02:31.262509  392749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 12:02:31.272708  392749 kubeadm.go:883] updating cluster {Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 12:02:31.272831  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:31.272875  392749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:02:31.336596  392749 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:02:31.336618  392749 crio.go:433] Images already preloaded, skipping extraction
	I0916 12:02:31.336662  392749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:02:31.370353  392749 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:02:31.370403  392749 cache_images.go:84] Images are preloaded, skipping loading
	I0916 12:02:31.370410  392749 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 12:02:31.370494  392749 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-132595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 12:02:31.370555  392749 ssh_runner.go:195] Run: crio config
	I0916 12:02:31.414217  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:31.414235  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:31.414244  392749 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 12:02:31.414263  392749 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-132595 NodeName:embed-certs-132595 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 12:02:31.414385  392749 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-132595"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 12:02:31.414491  392749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 12:02:31.423224  392749 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 12:02:31.423288  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 12:02:31.431649  392749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I0916 12:02:31.448899  392749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 12:02:31.465819  392749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0916 12:02:31.484203  392749 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 12:02:31.487892  392749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 12:02:31.498931  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:31.578175  392749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:02:31.591266  392749 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595 for IP: 192.168.103.2
	I0916 12:02:31.591291  392749 certs.go:194] generating shared ca certs ...
	I0916 12:02:31.591306  392749 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.591451  392749 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 12:02:31.591500  392749 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 12:02:31.591510  392749 certs.go:256] generating profile certs ...
	I0916 12:02:31.591562  392749 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key
	I0916 12:02:31.591590  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt with IP's: []
	I0916 12:02:31.709220  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt ...
	I0916 12:02:31.709248  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt: {Name:mka1d5a1edf02835642de8bdc842db8cd676a26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.709443  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key ...
	I0916 12:02:31.709455  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key: {Name:mk9ae7714dfa095c3ad43e583257aef75ede0041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.709547  392749 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d
	I0916 12:02:31.709562  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 12:02:32.044005  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d ...
	I0916 12:02:32.044031  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d: {Name:mk6feeeb4fe0f8ff0e129b6995e86e98cd2ff58b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.044202  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d ...
	I0916 12:02:32.044216  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d: {Name:mk11b35c16267750006ba91ba79ac0aeb369ed92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.044290  392749 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt
	I0916 12:02:32.044387  392749 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key
	I0916 12:02:32.044449  392749 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key
	I0916 12:02:32.044464  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt with IP's: []
	I0916 12:02:32.194505  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt ...
	I0916 12:02:32.194536  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt: {Name:mk85df2a3dc9e98fc7219fc1ae15551b09a34988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.194715  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key ...
	I0916 12:02:32.194728  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key: {Name:mk563636bd095728c8aa5b89edf7c40089c8fbee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.194890  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 12:02:32.194928  392749 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 12:02:32.194939  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 12:02:32.194961  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 12:02:32.194983  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 12:02:32.195002  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 12:02:32.195037  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 12:02:32.195649  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 12:02:32.220426  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 12:02:32.243923  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 12:02:32.267291  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 12:02:32.290723  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 12:02:32.316617  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 12:02:32.339515  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 12:02:32.362149  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 12:02:32.385198  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 12:02:32.407529  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 12:02:32.430567  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 12:02:32.454462  392749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 12:02:32.471580  392749 ssh_runner.go:195] Run: openssl version
	I0916 12:02:32.476960  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 12:02:32.485882  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.489223  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.489271  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.495629  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 12:02:32.505521  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 12:02:32.514505  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.517905  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.517965  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.524346  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 12:02:32.533311  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 12:02:32.542142  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.545523  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.545594  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.552365  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
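The three cert installs above all follow the same OpenSSL convention: copy the PEM into /usr/share/ca-certificates, compute its subject hash, then symlink it as /etc/ssl/certs/<hash>.0 so lookup-by-hash resolves it (hence 3ec20f2e.0, b5213941.0, and 51391683.0 in this run). A sketch that replays those two steps via os/exec:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM certificate and
    // symlinks it into /etc/ssl/certs as <hash>.0, mirroring the
    // `openssl x509 -hash -noout` plus `ln -fs` pair in the log above.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
        // The three cert paths installed in this run.
        for _, p := range []string{
            "/usr/share/ca-certificates/112082.pem",
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/11208.pem",
        } {
            if err := linkCACert(p); err != nil {
                panic(err)
            }
        }
    }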
	I0916 12:02:32.561599  392749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 12:02:32.565017  392749 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 12:02:32.565075  392749 kubeadm.go:392] StartCluster: {Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:02:32.565168  392749 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 12:02:32.565223  392749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 12:02:32.600725  392749 cri.go:89] found id: ""
	I0916 12:02:32.600782  392749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 12:02:32.610233  392749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 12:02:32.618479  392749 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 12:02:32.618526  392749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 12:02:32.626556  392749 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 12:02:32.626577  392749 kubeadm.go:157] found existing configuration files:
	
	I0916 12:02:32.626634  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 12:02:32.634946  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 12:02:32.635015  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 12:02:32.644145  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 12:02:32.652426  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 12:02:32.652498  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 12:02:32.660703  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 12:02:32.669431  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 12:02:32.669498  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 12:02:32.678101  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 12:02:32.686434  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 12:02:32.686497  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
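
Each grep/rm pair above performs the same stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is deleted so the upcoming kubeadm init can regenerate it. Expressed as a shell sketch (illustrative; minikube issues these commands from Go):

    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Remove any kubeconfig that does not reference the expected control-plane endpoint.
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$conf"; then
        sudo rm -f "/etc/kubernetes/$conf"
      fi
    done
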
	I0916 12:02:32.694571  392749 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 12:02:32.733058  392749 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 12:02:32.733397  392749 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 12:02:32.749836  392749 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 12:02:32.749917  392749 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 12:02:32.749956  392749 kubeadm.go:310] OS: Linux
	I0916 12:02:32.750007  392749 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 12:02:32.750062  392749 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 12:02:32.750114  392749 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 12:02:32.750170  392749 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 12:02:32.750227  392749 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 12:02:32.750313  392749 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 12:02:32.750390  392749 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 12:02:32.750464  392749 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 12:02:32.750559  392749 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 12:02:32.804063  392749 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 12:02:32.804209  392749 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 12:02:32.804363  392749 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 12:02:32.810567  392749 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 12:02:32.813202  392749 out.go:235]   - Generating certificates and keys ...
	I0916 12:02:32.813305  392749 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 12:02:32.813417  392749 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 12:02:33.063616  392749 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 12:02:33.364335  392749 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 12:02:33.538855  392749 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 12:02:33.629054  392749 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 12:02:33.726242  392749 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 12:02:33.726358  392749 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-132595 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 12:02:33.819559  392749 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 12:02:33.819747  392749 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-132595 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 12:02:34.040985  392749 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 12:02:34.313148  392749 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 12:02:34.371964  392749 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 12:02:34.372034  392749 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 12:02:34.533586  392749 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 12:02:34.613255  392749 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 12:02:34.821003  392749 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 12:02:35.043370  392749 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 12:02:35.119304  392749 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 12:02:35.119834  392749 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 12:02:35.122405  392749 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 12:02:35.124431  392749 out.go:235]   - Booting up control plane ...
	I0916 12:02:35.124649  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 12:02:35.124761  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 12:02:35.125024  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 12:02:35.134352  392749 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 12:02:35.139484  392749 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 12:02:35.139586  392749 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 12:02:35.219432  392749 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 12:02:35.219608  392749 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 12:02:35.721291  392749 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.804738ms
	I0916 12:02:35.721439  392749 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 12:02:40.722572  392749 kubeadm.go:310] [api-check] The API server is healthy after 5.001398972s
	I0916 12:02:40.734247  392749 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 12:02:40.746322  392749 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 12:02:40.764937  392749 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 12:02:40.765155  392749 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-132595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 12:02:40.772968  392749 kubeadm.go:310] [bootstrap-token] Using token: 7gckm0.d0i7kpdezz05toci
	I0916 12:02:40.774166  392749 out.go:235]   - Configuring RBAC rules ...
	I0916 12:02:40.774305  392749 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 12:02:40.777587  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 12:02:40.783580  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 12:02:40.786051  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 12:02:40.788495  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 12:02:40.790852  392749 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 12:02:41.130233  392749 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 12:02:41.550016  392749 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 12:02:42.129585  392749 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 12:02:42.130524  392749 kubeadm.go:310] 
	I0916 12:02:42.130589  392749 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 12:02:42.130596  392749 kubeadm.go:310] 
	I0916 12:02:42.130670  392749 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 12:02:42.130680  392749 kubeadm.go:310] 
	I0916 12:02:42.130716  392749 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 12:02:42.130790  392749 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 12:02:42.130890  392749 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 12:02:42.130913  392749 kubeadm.go:310] 
	I0916 12:02:42.131015  392749 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 12:02:42.131033  392749 kubeadm.go:310] 
	I0916 12:02:42.131106  392749 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 12:02:42.131126  392749 kubeadm.go:310] 
	I0916 12:02:42.131207  392749 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 12:02:42.131315  392749 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 12:02:42.131419  392749 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 12:02:42.131430  392749 kubeadm.go:310] 
	I0916 12:02:42.131547  392749 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 12:02:42.131658  392749 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 12:02:42.131670  392749 kubeadm.go:310] 
	I0916 12:02:42.131792  392749 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7gckm0.d0i7kpdezz05toci \
	I0916 12:02:42.131887  392749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 12:02:42.131918  392749 kubeadm.go:310] 	--control-plane 
	I0916 12:02:42.131928  392749 kubeadm.go:310] 
	I0916 12:02:42.132048  392749 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 12:02:42.132058  392749 kubeadm.go:310] 
	I0916 12:02:42.132192  392749 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7gckm0.d0i7kpdezz05toci \
	I0916 12:02:42.132353  392749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 12:02:42.135019  392749 kubeadm.go:310] W0916 12:02:32.730384    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 12:02:42.135280  392749 kubeadm.go:310] W0916 12:02:32.731017    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 12:02:42.135512  392749 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 12:02:42.135628  392749 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 12:02:42.135653  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:42.135665  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:42.138549  392749 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 12:02:42.139971  392749 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 12:02:42.143956  392749 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 12:02:42.143980  392749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 12:02:42.161613  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
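
Because the docker driver is paired with the crio runtime, minikube picks kindnet as the CNI, copies the generated manifest into the node, and applies it with the cluster's own kubectl binary and in-node kubeconfig. The equivalent manual step, with paths taken from the log above:

    # Apply the generated kindnet CNI manifest from inside the node.
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      apply -f /var/tmp/minikube/cni.yaml
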
	I0916 12:02:42.354438  392749 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 12:02:42.354510  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:42.354533  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-132595 minikube.k8s.io/updated_at=2024_09_16T12_02_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=embed-certs-132595 minikube.k8s.io/primary=true
	I0916 12:02:42.504120  392749 ops.go:34] apiserver oom_adj: -16
	I0916 12:02:42.504136  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:43.004526  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:43.504277  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:44.004404  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:44.505245  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:45.004800  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:45.505119  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.004265  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.505238  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.574458  392749 kubeadm.go:1113] duration metric: took 4.220005009s to wait for elevateKubeSystemPrivileges
	I0916 12:02:46.574491  392749 kubeadm.go:394] duration metric: took 14.009421351s to StartCluster
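
The repeated "kubectl get sa default" calls above (12:02:42 through 12:02:46) are a poll: alongside creating the minikube-rbac clusterrolebinding, minikube waits for the "default" service account to appear before declaring kube-system privileges elevated, which here took about 4.2s. A shell sketch of that wait (the 60s budget is an assumption for illustration):

    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
    # Poll every 0.5s until the "default" service account exists (assumed 60s budget).
    for _ in $(seq 1 120); do
      sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig \
        >/dev/null 2>&1 && break
      sleep 0.5
    done
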
	I0916 12:02:46.574511  392749 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:46.574575  392749 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:02:46.576211  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:46.576447  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 12:02:46.576447  392749 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:02:46.576532  392749 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 12:02:46.576624  392749 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132595"
	I0916 12:02:46.576643  392749 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132595"
	I0916 12:02:46.576668  392749 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:46.576688  392749 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132595"
	I0916 12:02:46.576656  392749 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-132595"
	I0916 12:02:46.576774  392749 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:02:46.577752  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.577919  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.579915  392749 out.go:177] * Verifying Kubernetes components...
	I0916 12:02:46.581417  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:46.602279  392749 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 12:02:46.603012  392749 addons.go:234] Setting addon default-storageclass=true in "embed-certs-132595"
	I0916 12:02:46.603054  392749 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:02:46.603533  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.603631  392749 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:02:46.603648  392749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 12:02:46.603704  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:46.623554  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:46.627265  392749 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 12:02:46.627294  392749 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 12:02:46.627365  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:46.652386  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:46.708797  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
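
The pipeline above rewrites the CoreDNS Corefile in place: it fetches the coredns ConfigMap, uses sed to insert a hosts block ahead of the forward directive (and a log directive after errors), then replaces the ConfigMap. Reconstructed from the sed expressions, the injected fragment is:

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }

This is what lets pods resolve host.minikube.internal to the host-side gateway, as the "host record injected" line below confirms.
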
	I0916 12:02:46.808404  392749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:02:46.819270  392749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:02:47.003989  392749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 12:02:47.322159  392749 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 12:02:47.325904  392749 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132595" to be "Ready" ...
	I0916 12:02:47.699966  392749 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 12:02:47.701431  392749 addons.go:510] duration metric: took 1.124891375s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 12:02:47.827676  392749 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-132595" context rescaled to 1 replicas
	I0916 12:02:49.329713  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:51.829363  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:53.829520  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:55.843996  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:58.329318  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:00.329424  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:02.329815  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:04.829037  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:06.829132  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:08.829193  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:11.329223  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:13.829206  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:16.329632  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:18.829227  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:21.329475  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:23.829047  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:25.829344  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:27.830100  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:28.329181  392749 node_ready.go:49] node "embed-certs-132595" has status "Ready":"True"
	I0916 12:03:28.329205  392749 node_ready.go:38] duration metric: took 41.003271788s for node "embed-certs-132595" to be "Ready" ...
	I0916 12:03:28.329215  392749 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:03:28.335458  392749 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.841863  392749 pod_ready.go:93] pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.841891  392749 pod_ready.go:82] duration metric: took 1.506402305s for pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.841902  392749 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.846650  392749 pod_ready.go:93] pod "etcd-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.846672  392749 pod_ready.go:82] duration metric: took 4.765058ms for pod "etcd-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.846685  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.851324  392749 pod_ready.go:93] pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.851348  392749 pod_ready.go:82] duration metric: took 4.655631ms for pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.851361  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.855398  392749 pod_ready.go:93] pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.855418  392749 pod_ready.go:82] duration metric: took 4.049899ms for pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.855427  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5jjq9" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.929577  392749 pod_ready.go:93] pod "kube-proxy-5jjq9" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.929598  392749 pod_ready.go:82] duration metric: took 74.164746ms for pod "kube-proxy-5jjq9" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.929610  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:30.329892  392749 pod_ready.go:93] pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:30.329915  392749 pod_ready.go:82] duration metric: took 400.298548ms for pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:30.329926  392749 pod_ready.go:39] duration metric: took 2.000698892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:03:30.329943  392749 api_server.go:52] waiting for apiserver process to appear ...
	I0916 12:03:30.329991  392749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 12:03:30.341465  392749 api_server.go:72] duration metric: took 43.764981639s to wait for apiserver process to appear ...
	I0916 12:03:30.341495  392749 api_server.go:88] waiting for apiserver healthz status ...
	I0916 12:03:30.341517  392749 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 12:03:30.346926  392749 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 12:03:30.347857  392749 api_server.go:141] control plane version: v1.31.1
	I0916 12:03:30.347882  392749 api_server.go:131] duration metric: took 6.380265ms to wait for apiserver health ...
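
The healthz wait is a plain HTTPS GET against the apiserver; the 200/ok above is the whole contract. Checked by hand from inside the node it would look like this (-k skips TLS verification purely to keep the sketch short):

    curl -k https://192.168.103.2:8443/healthz
    # expected output: ok
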
	I0916 12:03:30.347891  392749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 12:03:30.533749  392749 system_pods.go:59] 8 kube-system pods found
	I0916 12:03:30.533781  392749 system_pods.go:61] "coredns-7c65d6cfc9-lmhpj" [dec7e28f-bb5b-4238-abf8-a17607466015] Running
	I0916 12:03:30.533787  392749 system_pods.go:61] "etcd-embed-certs-132595" [a0b7465f-7b8a-4c03-9c7b-9aba551d7d98] Running
	I0916 12:03:30.533791  392749 system_pods.go:61] "kindnet-s4vkq" [8a7383ab-18b0-4118-9810-ff1cbbdd9ecf] Running
	I0916 12:03:30.533795  392749 system_pods.go:61] "kube-apiserver-embed-certs-132595" [8df2452b-d2dc-44af-86cb-75d1fb8a71d5] Running
	I0916 12:03:30.533798  392749 system_pods.go:61] "kube-controller-manager-embed-certs-132595" [673d272a-803b-45a5-81e7-ba32ff89ec4f] Running
	I0916 12:03:30.533801  392749 system_pods.go:61] "kube-proxy-5jjq9" [da63c6b0-19b1-4ab0-abc4-ac2b785e8e88] Running
	I0916 12:03:30.533805  392749 system_pods.go:61] "kube-scheduler-embed-certs-132595" [b8f3262f-ab89-4efd-8ec2-bcea70ce3c3f] Running
	I0916 12:03:30.533808  392749 system_pods.go:61] "storage-provisioner" [b94fecd1-4b72-474b-9296-fb5c86912f64] Running
	I0916 12:03:30.533814  392749 system_pods.go:74] duration metric: took 185.917389ms to wait for pod list to return data ...
	I0916 12:03:30.533821  392749 default_sa.go:34] waiting for default service account to be created ...
	I0916 12:03:30.730240  392749 default_sa.go:45] found service account: "default"
	I0916 12:03:30.730269  392749 default_sa.go:55] duration metric: took 196.441382ms for default service account to be created ...
	I0916 12:03:30.730278  392749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 12:03:30.932106  392749 system_pods.go:86] 8 kube-system pods found
	I0916 12:03:30.932141  392749 system_pods.go:89] "coredns-7c65d6cfc9-lmhpj" [dec7e28f-bb5b-4238-abf8-a17607466015] Running
	I0916 12:03:30.932149  392749 system_pods.go:89] "etcd-embed-certs-132595" [a0b7465f-7b8a-4c03-9c7b-9aba551d7d98] Running
	I0916 12:03:30.932155  392749 system_pods.go:89] "kindnet-s4vkq" [8a7383ab-18b0-4118-9810-ff1cbbdd9ecf] Running
	I0916 12:03:30.932160  392749 system_pods.go:89] "kube-apiserver-embed-certs-132595" [8df2452b-d2dc-44af-86cb-75d1fb8a71d5] Running
	I0916 12:03:30.932165  392749 system_pods.go:89] "kube-controller-manager-embed-certs-132595" [673d272a-803b-45a5-81e7-ba32ff89ec4f] Running
	I0916 12:03:30.932170  392749 system_pods.go:89] "kube-proxy-5jjq9" [da63c6b0-19b1-4ab0-abc4-ac2b785e8e88] Running
	I0916 12:03:30.932175  392749 system_pods.go:89] "kube-scheduler-embed-certs-132595" [b8f3262f-ab89-4efd-8ec2-bcea70ce3c3f] Running
	I0916 12:03:30.932180  392749 system_pods.go:89] "storage-provisioner" [b94fecd1-4b72-474b-9296-fb5c86912f64] Running
	I0916 12:03:30.932189  392749 system_pods.go:126] duration metric: took 201.903374ms to wait for k8s-apps to be running ...
	I0916 12:03:30.932199  392749 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 12:03:30.932250  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 12:03:30.943724  392749 system_svc.go:56] duration metric: took 11.513209ms WaitForService to wait for kubelet
	I0916 12:03:30.943753  392749 kubeadm.go:582] duration metric: took 44.367276865s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:03:30.943776  392749 node_conditions.go:102] verifying NodePressure condition ...
	I0916 12:03:31.130459  392749 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 12:03:31.130490  392749 node_conditions.go:123] node cpu capacity is 8
	I0916 12:03:31.130506  392749 node_conditions.go:105] duration metric: took 186.72463ms to run NodePressure ...
	I0916 12:03:31.130519  392749 start.go:241] waiting for startup goroutines ...
	I0916 12:03:31.130528  392749 start.go:246] waiting for cluster config update ...
	I0916 12:03:31.130542  392749 start.go:255] writing updated cluster config ...
	I0916 12:03:31.130846  392749 ssh_runner.go:195] Run: rm -f paused
	I0916 12:03:31.136980  392749 out.go:177] * Done! kubectl is now configured to use "embed-certs-132595" cluster and "default" namespace by default
	E0916 12:03:31.138336  392749 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
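
This final "exec format error" means the kernel refused to execute /usr/local/bin/kubectl on the test host, which typically indicates a binary built for a different architecture (or a truncated/corrupt download); the in-cluster kubectl invocations above all succeeded, so only the host-side binary is affected. A quick way to confirm (illustrative):

    file /usr/local/bin/kubectl   # report the binary's format and target architecture
    uname -m                      # host architecture, for comparison
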
	
	
	==> CRI-O <==
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.568561957Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=86d2dd83-a69f-4100-8a41-c43a51cc8aed name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.570462648Z" level=info msg="Got pod network &{Name:coredns-7c65d6cfc9-lmhpj Namespace:kube-system ID:0699e4a1527a5e22ec2c5a3eae9411ebd5f65603f653f4ead716e20f5f2ea774 UID:dec7e28f-bb5b-4238-abf8-a17607466015 NetNS:/var/run/netns/f5817722-6ecf-471c-a9b7-43486a0002b3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.570588826Z" level=info msg="Checking pod kube-system_coredns-7c65d6cfc9-lmhpj for CNI network kindnet (type=ptp)"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.572423980Z" level=info msg="Ran pod sandbox e29106fe85d76fa4de619461e9d9494576a9dccd6c91884e0f1da2ca7d20785d with infra container: kube-system/storage-provisioner/POD" id=86d2dd83-a69f-4100-8a41-c43a51cc8aed name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.573376430Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb33efd2-983e-4902-bc01-08f82368b237 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.573461657Z" level=info msg="Ran pod sandbox 0699e4a1527a5e22ec2c5a3eae9411ebd5f65603f653f4ead716e20f5f2ea774 with infra container: kube-system/coredns-7c65d6cfc9-lmhpj/POD" id=cbe6f7c1-4df2-4f71-94f9-b99abe59bc6d name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.573660173Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bb33efd2-983e-4902-bc01-08f82368b237 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.574348611Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=501e55a1-9662-4590-b631-8ced001643cb name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.574410998Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=a456219a-02a2-49f9-9936-650df962db25 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.574580246Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=a456219a-02a2-49f9-9936-650df962db25 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.574605317Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=501e55a1-9662-4590-b631-8ced001643cb name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575178636Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=c366e22a-5e1f-491e-9035-6fd374bfe7b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575303969Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=59687f8b-e7c9-4348-8d28-901638ae293b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575368833Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c366e22a-5e1f-491e-9035-6fd374bfe7b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575400341Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.576002923Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-lmhpj/coredns" id=8d08e494-ec0b-4b6d-be32-c6718a9b0d95 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.576092718Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.586948820Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e375941d0176fe56097181bec36e35d52a5a7d8cd1d147901099904551bc4537/merged/etc/passwd: no such file or directory"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.586986448Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e375941d0176fe56097181bec36e35d52a5a7d8cd1d147901099904551bc4537/merged/etc/group: no such file or directory"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.628157312Z" level=info msg="Created container 04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02: kube-system/storage-provisioner/storage-provisioner" id=59687f8b-e7c9-4348-8d28-901638ae293b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.628774234Z" level=info msg="Starting container: 04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02" id=c6eae36f-652b-4bb4-ba8e-a3d4aef3a5a8 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.636335332Z" level=info msg="Started container" PID=2235 containerID=04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02 description=kube-system/storage-provisioner/storage-provisioner id=c6eae36f-652b-4bb4-ba8e-a3d4aef3a5a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e29106fe85d76fa4de619461e9d9494576a9dccd6c91884e0f1da2ca7d20785d
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.638679752Z" level=info msg="Created container 6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454: kube-system/coredns-7c65d6cfc9-lmhpj/coredns" id=8d08e494-ec0b-4b6d-be32-c6718a9b0d95 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.639344270Z" level=info msg="Starting container: 6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454" id=08658758-7578-4590-a126-dd9c8a60b47c name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.646205266Z" level=info msg="Started container" PID=2253 containerID=6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454 description=kube-system/coredns-7c65d6cfc9-lmhpj/coredns id=08658758-7578-4590-a126-dd9c8a60b47c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0699e4a1527a5e22ec2c5a3eae9411ebd5f65603f653f4ead716e20f5f2ea774
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6198a816b1cdf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   0                   0699e4a1527a5       coredns-7c65d6cfc9-lmhpj
	04bb82a52f980       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       0                   e29106fe85d76       storage-provisioner
	7e5dd3f1d7192       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   44 seconds ago      Running             kindnet-cni               0                   71c9e456d6bf1       kindnet-s4vkq
	044a317804ef8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   44 seconds ago      Running             kube-proxy                0                   d7e6cbd74393e       kube-proxy-5jjq9
	aa00ff4074279       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   55 seconds ago      Running             etcd                      0                   079241af2ce23       etcd-embed-certs-132595
	43acde1f85a74       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   55 seconds ago      Running             kube-controller-manager   0                   9e06df2fe014b       kube-controller-manager-embed-certs-132595
	2e260ff8685de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   55 seconds ago      Running             kube-scheduler            0                   f4b76295d450a       kube-scheduler-embed-certs-132595
	09176dad2cb1c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   55 seconds ago      Running             kube-apiserver            0                   06c0e46f86ec9       kube-apiserver-embed-certs-132595
	
	
	==> coredns [6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43571 - 19451 "HINFO IN 8873162753112370163.8194975584790532838. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011764927s
	
	
	==> describe nodes <==
	Name:               embed-certs-132595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-132595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=embed-certs-132595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T12_02_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 12:02:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-132595
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 12:03:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:03:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-132595
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdbbf6049dff4c2fbfb05ee6d4e44c79
	  System UUID:                ac9bc1b7-26e7-4faa-ad97-c61b5564343d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-lmhpj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     46s
	  kube-system                 etcd-embed-certs-132595                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         51s
	  kube-system                 kindnet-s4vkq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      46s
	  kube-system                 kube-apiserver-embed-certs-132595             250m (3%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-embed-certs-132595    200m (2%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-proxy-5jjq9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-scheduler-embed-certs-132595             100m (1%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 44s                kube-proxy       
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x7 over 57s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   Starting                 51s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 51s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  51s                kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    51s                kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     51s                kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                node-controller  Node embed-certs-132595 event: Registered Node embed-certs-132595 in Controller
	  Normal   NodeReady                4s                 kubelet          Node embed-certs-132595 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.954619] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000006] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.059994] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000007] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +6.207537] net_ratelimit: 5 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +8.191403] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000002] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.003944] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000002] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	
	
	==> etcd [aa00ff407427948b5e089e635cd56649f686d7a6c9e475586db68aae2101c56f] <==
	{"level":"info","ts":"2024-09-16T12:02:36.601518Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T12:02:36.601616Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T12:02:36.601663Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T12:02:36.601767Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T12:02:36.601799Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T12:02:37.030837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.032010Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.032867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T12:02:37.032863Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-132595 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T12:02:37.032902Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T12:02:37.033174Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T12:02:37.033206Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T12:02:37.033382Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.033486Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.033512Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.034156Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T12:02:37.034157Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T12:02:37.035300Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T12:02:37.035303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:03:32 up  1:45,  0 users,  load average: 1.47, 1.14, 0.98
	Linux embed-certs-132595 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7e5dd3f1d71925f826db082ad675d3101e7aae3acce70fd7b76a514c9a89f6fd] <==
	W0916 12:03:18.015731       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015833       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015860       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015837       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:18.015898       1 trace.go:236] Trace[31706214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[31706214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[31706214]: [30.001681318s] [30.001681318s] END
	I0916 12:03:18.015898       1 trace.go:236] Trace[592354911]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[592354911]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[592354911]: [30.001620265s] [30.001620265s] END
	E0916 12:03:18.015924       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0916 12:03:18.015923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:18.015935       1 trace.go:236] Trace[1491040377]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[1491040377]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[1491040377]: [30.001712492s] [30.001712492s] END
	I0916 12:03:18.015935       1 trace.go:236] Trace[2145919099]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[2145919099]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[2145919099]: [30.001694697s] [30.001694697s] END
	E0916 12:03:18.015954       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0916 12:03:18.015960       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:19.315171       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 12:03:19.315197       1 metrics.go:61] Registering metrics
	I0916 12:03:19.315264       1 controller.go:374] Syncing nftables rules
	I0916 12:03:28.021447       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:03:28.021489       1 main.go:299] handling current node
	
	
	==> kube-apiserver [09176dad2cb1c2af9cd3430fb4f7fca0bd2ff37e3126706cf8504f9f1f4f54cc] <==
	I0916 12:02:39.038473       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 12:02:39.038479       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 12:02:39.038485       1 cache.go:39] Caches are synced for autoregister controller
	I0916 12:02:39.093479       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 12:02:39.093511       1 policy_source.go:224] refreshing policies
	I0916 12:02:39.093524       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0916 12:02:39.096961       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0916 12:02:39.097047       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0916 12:02:39.140068       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 12:02:39.299961       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 12:02:39.942205       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 12:02:39.946102       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 12:02:39.946121       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 12:02:40.388883       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 12:02:40.428179       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 12:02:40.547548       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 12:02:40.553786       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 12:02:40.554918       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 12:02:40.559315       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 12:02:41.002384       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 12:02:41.536771       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 12:02:41.548758       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 12:02:41.557550       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 12:02:46.656584       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 12:02:46.807047       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [43acde1f85a74d4bd7d60bb7ed1dbd6e59079441a502cee1be854f6abe5e35b6] <==
	I0916 12:02:46.002200       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-132595"
	I0916 12:02:46.002244       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0916 12:02:46.003457       1 shared_informer.go:320] Caches are synced for job
	I0916 12:02:46.004658       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 12:02:46.004668       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 12:02:46.005761       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 12:02:46.362324       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 12:02:46.362359       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 12:02:46.373448       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 12:02:46.510926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:02:47.005464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="311.027327ms"
	I0916 12:02:47.016485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.96288ms"
	I0916 12:02:47.016598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.136µs"
	I0916 12:02:47.016676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="46.065µs"
	I0916 12:02:47.095940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.503µs"
	I0916 12:02:47.497738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.189918ms"
	I0916 12:02:47.507073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.188248ms"
	I0916 12:02:47.507293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.054µs"
	I0916 12:03:28.228034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:03:28.237164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:03:28.243973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="113.42µs"
	I0916 12:03:28.254166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.618µs"
	I0916 12:03:29.601516       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.101551ms"
	I0916 12:03:29.601730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.683µs"
	I0916 12:03:31.009954       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [044a317804ef8bd211cafdc21ae7bf14d25d5e48ffbf28d2a623796fc0f3bec3] <==
	I0916 12:02:47.629252       1 server_linux.go:66] "Using iptables proxy"
	I0916 12:02:47.753705       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 12:02:47.753773       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 12:02:47.773672       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 12:02:47.773734       1 server_linux.go:169] "Using iptables Proxier"
	I0916 12:02:47.775582       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 12:02:47.775952       1 server.go:483] "Version info" version="v1.31.1"
	I0916 12:02:47.775990       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 12:02:47.778496       1 config.go:105] "Starting endpoint slice config controller"
	I0916 12:02:47.778576       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 12:02:47.778508       1 config.go:328] "Starting node config controller"
	I0916 12:02:47.778658       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 12:02:47.778538       1 config.go:199] "Starting service config controller"
	I0916 12:02:47.778717       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 12:02:47.879076       1 shared_informer.go:320] Caches are synced for service config
	I0916 12:02:47.879101       1 shared_informer.go:320] Caches are synced for node config
	I0916 12:02:47.879085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2e260ff8685de88af344ef117d8cbfa1ff17b511040b27ce76779a023b1eaa4d] <==
	W0916 12:02:39.021480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 12:02:39.021498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.021539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 12:02:39.021562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.021575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 12:02:39.021601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.859282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 12:02:39.859324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.895908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 12:02:39.895947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.912462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 12:02:39.912503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.947122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 12:02:39.947168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.965072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 12:02:39.965130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.967121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 12:02:39.967256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.977801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 12:02:39.977847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:40.200165       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 12:02:40.200204       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 12:02:40.236012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 12:02:40.236065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 12:02:42.917675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995279    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml6lb\" (UniqueName: \"kubernetes.io/projected/da63c6b0-19b1-4ab0-abc4-ac2b785e8e88-kube-api-access-ml6lb\") pod \"kube-proxy-5jjq9\" (UID: \"da63c6b0-19b1-4ab0-abc4-ac2b785e8e88\") " pod="kube-system/kube-proxy-5jjq9"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995363    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da63c6b0-19b1-4ab0-abc4-ac2b785e8e88-xtables-lock\") pod \"kube-proxy-5jjq9\" (UID: \"da63c6b0-19b1-4ab0-abc4-ac2b785e8e88\") " pod="kube-system/kube-proxy-5jjq9"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995413    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a7383ab-18b0-4118-9810-ff1cbbdd9ecf-xtables-lock\") pod \"kindnet-s4vkq\" (UID: \"8a7383ab-18b0-4118-9810-ff1cbbdd9ecf\") " pod="kube-system/kindnet-s4vkq"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995449    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da63c6b0-19b1-4ab0-abc4-ac2b785e8e88-lib-modules\") pod \"kube-proxy-5jjq9\" (UID: \"da63c6b0-19b1-4ab0-abc4-ac2b785e8e88\") " pod="kube-system/kube-proxy-5jjq9"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995472    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8a7383ab-18b0-4118-9810-ff1cbbdd9ecf-cni-cfg\") pod \"kindnet-s4vkq\" (UID: \"8a7383ab-18b0-4118-9810-ff1cbbdd9ecf\") " pod="kube-system/kindnet-s4vkq"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995497    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nf57\" (UniqueName: \"kubernetes.io/projected/8a7383ab-18b0-4118-9810-ff1cbbdd9ecf-kube-api-access-9nf57\") pod \"kindnet-s4vkq\" (UID: \"8a7383ab-18b0-4118-9810-ff1cbbdd9ecf\") " pod="kube-system/kindnet-s4vkq"
	Sep 16 12:02:47 embed-certs-132595 kubelet[1652]: I0916 12:02:47.103151    1652 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 12:02:48 embed-certs-132595 kubelet[1652]: I0916 12:02:48.507465    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s4vkq" podStartSLOduration=2.507441772 podStartE2EDuration="2.507441772s" podCreationTimestamp="2024-09-16 12:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:02:48.507392289 +0000 UTC m=+7.183060533" watchObservedRunningTime="2024-09-16 12:02:48.507441772 +0000 UTC m=+7.183110017"
	Sep 16 12:02:48 embed-certs-132595 kubelet[1652]: I0916 12:02:48.517119    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jjq9" podStartSLOduration=2.5170927770000002 podStartE2EDuration="2.517092777s" podCreationTimestamp="2024-09-16 12:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:02:48.517001015 +0000 UTC m=+7.192669259" watchObservedRunningTime="2024-09-16 12:02:48.517092777 +0000 UTC m=+7.192761020"
	Sep 16 12:02:51 embed-certs-132595 kubelet[1652]: E0916 12:02:51.434502    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488171434318936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:02:51 embed-certs-132595 kubelet[1652]: E0916 12:02:51.434548    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488171434318936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:01 embed-certs-132595 kubelet[1652]: E0916 12:03:01.435826    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488181435634385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:01 embed-certs-132595 kubelet[1652]: E0916 12:03:01.435872    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488181435634385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:11 embed-certs-132595 kubelet[1652]: E0916 12:03:11.437372    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488191437189942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:11 embed-certs-132595 kubelet[1652]: E0916 12:03:11.437413    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488191437189942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:21 embed-certs-132595 kubelet[1652]: E0916 12:03:21.438567    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488201438406245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:21 embed-certs-132595 kubelet[1652]: E0916 12:03:21.438612    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488201438406245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.218921    1652 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387462    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dec7e28f-bb5b-4238-abf8-a17607466015-config-volume\") pod \"coredns-7c65d6cfc9-lmhpj\" (UID: \"dec7e28f-bb5b-4238-abf8-a17607466015\") " pod="kube-system/coredns-7c65d6cfc9-lmhpj"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387509    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2qs\" (UniqueName: \"kubernetes.io/projected/dec7e28f-bb5b-4238-abf8-a17607466015-kube-api-access-qm2qs\") pod \"coredns-7c65d6cfc9-lmhpj\" (UID: \"dec7e28f-bb5b-4238-abf8-a17607466015\") " pod="kube-system/coredns-7c65d6cfc9-lmhpj"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387532    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b94fecd1-4b72-474b-9296-fb5c86912f64-tmp\") pod \"storage-provisioner\" (UID: \"b94fecd1-4b72-474b-9296-fb5c86912f64\") " pod="kube-system/storage-provisioner"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387546    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w5lv\" (UniqueName: \"kubernetes.io/projected/b94fecd1-4b72-474b-9296-fb5c86912f64-kube-api-access-2w5lv\") pod \"storage-provisioner\" (UID: \"b94fecd1-4b72-474b-9296-fb5c86912f64\") " pod="kube-system/storage-provisioner"
	Sep 16 12:03:29 embed-certs-132595 kubelet[1652]: I0916 12:03:29.584364    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.584345227 podStartE2EDuration="42.584345227s" podCreationTimestamp="2024-09-16 12:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:03:29.58423399 +0000 UTC m=+48.259902234" watchObservedRunningTime="2024-09-16 12:03:29.584345227 +0000 UTC m=+48.260013470"
	Sep 16 12:03:31 embed-certs-132595 kubelet[1652]: E0916 12:03:31.439757    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488211439535902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:31 embed-certs-132595 kubelet[1652]: E0916 12:03:31.439799    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488211439535902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02] <==
	I0916 12:03:28.648757       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 12:03:28.659095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 12:03:28.659135       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 12:03:28.701058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 12:03:28.701294       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794!
	I0916 12:03:28.701548       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"145c877d-a7a1-47fc-887a-f3ff6cf439ce", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794 became leader
	I0916 12:03:28.801622       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794!
	

-- /stdout --
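
Two failure signatures in the dump above are worth decoding. The kindnet reflector errors are 30-second i/o timeouts against the service VIP 10.96.0.1:443 that clear on the first retry (its caches sync at 12:03:19), so they reflect startup ordering rather than a broken CNI. The kubelet's repeated "Eviction manager: failed to get HasDedicatedImageFs ... missing image stats" lines mean the kubelet could not derive usable image-filesystem stats from the CRI response it received (the ImageFsInfoResponse it got back is quoted verbatim in the error). A minimal way to see what cri-o itself reports, assuming shell access to the node (e.g. via minikube ssh) and a stock crictl install, is:

    $ sudo crictl imagefsinfo
    # prints the CRI ImageFsInfoResponse (mountpoint, usedBytes, inodesUsed)
    # -- the same structure the eviction manager complains about; the mountpoint
    # /var/lib/containers/storage/overlay-images is cri-o's image store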
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132595 -n embed-certs-132595
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (574.931µs)
helpers_test.go:263: kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
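
The recurring "fork/exec /usr/local/bin/kubectl: exec format error" is ENOEXEC from the kernel: the file at that path is not an executable the host can run (typically a wrong-architecture or truncated binary), which is why each kubectl invocation fails in well under a millisecond (574.931µs here). A quick sanity check on the runner, assuming only the path taken from the error message, would be:

    $ file /usr/local/bin/kubectl    # expect: ELF 64-bit LSB executable, x86-64
    $ uname -m                       # expect: x86_64 on this runner
    $ head -c 4 /usr/local/bin/kubectl | xxd   # a valid ELF starts with 7f 45 4c 46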
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-132595
helpers_test.go:235: (dbg) docker inspect embed-certs-132595:

-- stdout --
	[
	    {
	        "Id": "9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95",
	        "Created": "2024-09-16T12:02:27.844570227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393450,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T12:02:27.964272788Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/hosts",
	        "LogPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95-json.log",
	        "Name": "/embed-certs-132595",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-132595:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-132595",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-132595",
	                "Source": "/var/lib/docker/volumes/embed-certs-132595/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-132595",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-132595",
	                "name.minikube.sigs.k8s.io": "embed-certs-132595",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8051876631e629be3d63d04a25b08c24b1f81adc45f3ad239f7bc136e91b56ad",
	            "SandboxKey": "/var/run/docker/netns/8051876631e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-132595": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2bfc3c9091b0bc051827133f808c3cb85965e63d2bf1e9667fc1a6a160dc08f4",
	                    "EndpointID": "2e4a82502e88e3414290611bf291eaf399e6bd167c079853617718aca5cc9c76",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-132595",
	                        "9f079caa1423"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
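
For anyone replaying this post-mortem by hand: the host port mappings captured in the inspect output can be pulled out with a Go template instead of scanning the JSON. A sketch, using the container name from this run:

    $ docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' embed-certs-132595
    33131

33131 is the 127.0.0.1 port on which the container's 8443/tcp (the API server) is published, matching the NetworkSettings.Ports block above.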
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132595 -n embed-certs-132595
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-132595 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-132595 logs -n 25: (1.11066531s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-451928  | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-451928       | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-451928                           | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-483277 --memory=2200 --alsologtostderr   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-483277             | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-483277                  | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-483277 --memory=2200 --alsologtostderr   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-483277 image list                           | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	| delete  | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	| start   | -p embed-certs-132595                                  | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 12:02:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 12:02:22.316707  392749 out.go:345] Setting OutFile to fd 1 ...
	I0916 12:02:22.316980  392749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:02:22.316990  392749 out.go:358] Setting ErrFile to fd 2...
	I0916 12:02:22.316994  392749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:02:22.317211  392749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 12:02:22.317988  392749 out.go:352] Setting JSON to false
	I0916 12:02:22.319189  392749 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6282,"bootTime":1726481860,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 12:02:22.319253  392749 start.go:139] virtualization: kvm guest
	I0916 12:02:22.321724  392749 out.go:177] * [embed-certs-132595] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 12:02:22.323580  392749 notify.go:220] Checking for updates...
	I0916 12:02:22.323619  392749 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 12:02:22.325184  392749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 12:02:22.326831  392749 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:02:22.328293  392749 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 12:02:22.329741  392749 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 12:02:22.331375  392749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 12:02:22.333444  392749 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333594  392749 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333730  392749 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333861  392749 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 12:02:22.357827  392749 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 12:02:22.357973  392749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 12:02:22.415015  392749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 12:02:22.404189354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 12:02:22.415142  392749 docker.go:318] overlay module found
	I0916 12:02:22.418459  392749 out.go:177] * Using the docker driver based on user configuration
	I0916 12:02:22.420009  392749 start.go:297] selected driver: docker
	I0916 12:02:22.420030  392749 start.go:901] validating driver "docker" against <nil>
	I0916 12:02:22.420041  392749 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 12:02:22.420849  392749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 12:02:22.481968  392749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 12:02:22.472332251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 12:02:22.482174  392749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 12:02:22.482464  392749 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:02:22.484723  392749 out.go:177] * Using Docker driver with root privileges
	I0916 12:02:22.486426  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:22.486474  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:22.486482  392749 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 12:02:22.486556  392749 start.go:340] cluster config:
	{Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:02:22.488572  392749 out.go:177] * Starting "embed-certs-132595" primary control-plane node in "embed-certs-132595" cluster
	I0916 12:02:22.490260  392749 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 12:02:22.492012  392749 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 12:02:22.493615  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:22.493670  392749 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 12:02:22.493684  392749 cache.go:56] Caching tarball of preloaded images
	I0916 12:02:22.493725  392749 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 12:02:22.493780  392749 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 12:02:22.493797  392749 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 12:02:22.493914  392749 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json ...
	I0916 12:02:22.493936  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json: {Name:mk85e2df12eb3418e581ab1558bdddacab4821d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 12:02:22.516611  392749 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 12:02:22.516634  392749 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 12:02:22.516701  392749 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 12:02:22.516717  392749 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 12:02:22.516721  392749 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 12:02:22.516728  392749 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 12:02:22.516735  392749 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 12:02:22.577454  392749 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 12:02:22.577503  392749 cache.go:194] Successfully downloaded all kic artifacts
	I0916 12:02:22.577543  392749 start.go:360] acquireMachinesLock for embed-certs-132595: {Name:mk90285717afa09eeba6eb1eaf13ca243fd0e8ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:02:22.577688  392749 start.go:364] duration metric: took 123.446µs to acquireMachinesLock for "embed-certs-132595"
	I0916 12:02:22.577716  392749 start.go:93] Provisioning new machine with config: &{Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:02:22.577790  392749 start.go:125] createHost starting for "" (driver="docker")
	I0916 12:02:22.580825  392749 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 12:02:22.581158  392749 start.go:159] libmachine.API.Create for "embed-certs-132595" (driver="docker")
	I0916 12:02:22.581194  392749 client.go:168] LocalClient.Create starting
	I0916 12:02:22.581279  392749 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 12:02:22.581315  392749 main.go:141] libmachine: Decoding PEM data...
	I0916 12:02:22.581364  392749 main.go:141] libmachine: Parsing certificate...
	I0916 12:02:22.581424  392749 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 12:02:22.581453  392749 main.go:141] libmachine: Decoding PEM data...
	I0916 12:02:22.581469  392749 main.go:141] libmachine: Parsing certificate...
	I0916 12:02:22.581917  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 12:02:22.601058  392749 cli_runner.go:211] docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 12:02:22.601120  392749 network_create.go:284] running [docker network inspect embed-certs-132595] to gather additional debugging logs...
	I0916 12:02:22.601136  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595
	W0916 12:02:22.619588  392749 cli_runner.go:211] docker network inspect embed-certs-132595 returned with exit code 1
	I0916 12:02:22.619629  392749 network_create.go:287] error running [docker network inspect embed-certs-132595]: docker network inspect embed-certs-132595: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-132595 not found
	I0916 12:02:22.619641  392749 network_create.go:289] output of [docker network inspect embed-certs-132595]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-132595 not found
	
	** /stderr **
	I0916 12:02:22.619744  392749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 12:02:22.638437  392749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 12:02:22.639338  392749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 12:02:22.640220  392749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 12:02:22.641011  392749 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 12:02:22.641944  392749 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 12:02:22.642797  392749 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 12:02:22.643883  392749 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023ed510}
	I0916 12:02:22.643904  392749 network_create.go:124] attempt to create docker network embed-certs-132595 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 12:02:22.643965  392749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-132595 embed-certs-132595
	I0916 12:02:22.717370  392749 network_create.go:108] docker network embed-certs-132595 192.168.103.0/24 created
	I0916 12:02:22.717419  392749 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-132595" container
	I0916 12:02:22.717475  392749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 12:02:22.739425  392749 cli_runner.go:164] Run: docker volume create embed-certs-132595 --label name.minikube.sigs.k8s.io=embed-certs-132595 --label created_by.minikube.sigs.k8s.io=true
	I0916 12:02:22.758826  392749 oci.go:103] Successfully created a docker volume embed-certs-132595
	I0916 12:02:22.758921  392749 cli_runner.go:164] Run: docker run --rm --name embed-certs-132595-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-132595 --entrypoint /usr/bin/test -v embed-certs-132595:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 12:02:23.286517  392749 oci.go:107] Successfully prepared a docker volume embed-certs-132595
	I0916 12:02:23.286582  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:23.286608  392749 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 12:02:23.286686  392749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-132595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 12:02:27.777252  392749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-132595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490517682s)
	I0916 12:02:27.777293  392749 kic.go:203] duration metric: took 4.490683033s to extract preloaded images to volume ...
	W0916 12:02:27.777479  392749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 12:02:27.777606  392749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 12:02:27.828245  392749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-132595 --name embed-certs-132595 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-132595 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-132595 --network embed-certs-132595 --ip 192.168.103.2 --volume embed-certs-132595:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 12:02:28.129271  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Running}}
	I0916 12:02:28.148758  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.168574  392749 cli_runner.go:164] Run: docker exec embed-certs-132595 stat /var/lib/dpkg/alternatives/iptables
	I0916 12:02:28.214356  392749 oci.go:144] the created container "embed-certs-132595" has a running status.
	I0916 12:02:28.214398  392749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa...
	I0916 12:02:28.579373  392749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 12:02:28.600739  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.623045  392749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 12:02:28.623068  392749 kic_runner.go:114] Args: [docker exec --privileged embed-certs-132595 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 12:02:28.687280  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.707892  392749 machine.go:93] provisionDockerMachine start ...
	I0916 12:02:28.707978  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:28.730282  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:28.730549  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:28.730566  392749 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 12:02:28.864997  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132595
	
	I0916 12:02:28.865036  392749 ubuntu.go:169] provisioning hostname "embed-certs-132595"
	I0916 12:02:28.865105  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:28.884140  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:28.884312  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:28.884326  392749 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-132595 && echo "embed-certs-132595" | sudo tee /etc/hostname
	I0916 12:02:29.033007  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132595
	
	I0916 12:02:29.033095  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.051460  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:29.051736  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:29.051767  392749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-132595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-132595/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-132595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 12:02:29.185811  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 12:02:29.185838  392749 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 12:02:29.185872  392749 ubuntu.go:177] setting up certificates
	I0916 12:02:29.185882  392749 provision.go:84] configureAuth start
	I0916 12:02:29.185932  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:29.205104  392749 provision.go:143] copyHostCerts
	I0916 12:02:29.205177  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 12:02:29.205191  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 12:02:29.205266  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 12:02:29.205379  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 12:02:29.205393  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 12:02:29.205443  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 12:02:29.205574  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 12:02:29.205591  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 12:02:29.205628  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 12:02:29.205725  392749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.embed-certs-132595 san=[127.0.0.1 192.168.103.2 embed-certs-132595 localhost minikube]
	I0916 12:02:29.295413  392749 provision.go:177] copyRemoteCerts
	I0916 12:02:29.295493  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 12:02:29.295539  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.314056  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:29.410212  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 12:02:29.433316  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 12:02:29.457490  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 12:02:29.480514  392749 provision.go:87] duration metric: took 294.616578ms to configureAuth
	I0916 12:02:29.480546  392749 ubuntu.go:193] setting minikube options for container-runtime
	I0916 12:02:29.480721  392749 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:29.480840  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.499779  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:29.499970  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:29.499988  392749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 12:02:29.724131  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 12:02:29.724156  392749 machine.go:96] duration metric: took 1.016241182s to provisionDockerMachine
	I0916 12:02:29.724168  392749 client.go:171] duration metric: took 7.142967574s to LocalClient.Create
	I0916 12:02:29.724184  392749 start.go:167] duration metric: took 7.143028884s to libmachine.API.Create "embed-certs-132595"
	I0916 12:02:29.724192  392749 start.go:293] postStartSetup for "embed-certs-132595" (driver="docker")
	I0916 12:02:29.724206  392749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 12:02:29.724308  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 12:02:29.724425  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.742132  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:29.838555  392749 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 12:02:29.841984  392749 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 12:02:29.842030  392749 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 12:02:29.842042  392749 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 12:02:29.842049  392749 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 12:02:29.842061  392749 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 12:02:29.842134  392749 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 12:02:29.842223  392749 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 12:02:29.842335  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 12:02:29.850676  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 12:02:29.874023  392749 start.go:296] duration metric: took 149.81451ms for postStartSetup
	I0916 12:02:29.874395  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:29.891665  392749 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json ...
	I0916 12:02:29.891935  392749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 12:02:29.891976  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.910185  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.002481  392749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 12:02:30.007206  392749 start.go:128] duration metric: took 7.429401034s to createHost
	I0916 12:02:30.007234  392749 start.go:83] releasing machines lock for "embed-certs-132595", held for 7.4295318s
	I0916 12:02:30.007311  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:30.025002  392749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 12:02:30.025037  392749 ssh_runner.go:195] Run: cat /version.json
	I0916 12:02:30.025102  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:30.025103  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:30.043705  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.044185  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.210633  392749 ssh_runner.go:195] Run: systemctl --version
	I0916 12:02:30.215247  392749 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 12:02:30.353292  392749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 12:02:30.357777  392749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:02:30.376319  392749 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 12:02:30.376406  392749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:02:30.406228  392749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 12:02:30.406253  392749 start.go:495] detecting cgroup driver to use...
	I0916 12:02:30.406283  392749 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 12:02:30.406323  392749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 12:02:30.421100  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 12:02:30.432505  392749 docker.go:217] disabling cri-docker service (if available) ...
	I0916 12:02:30.432561  392749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 12:02:30.445665  392749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 12:02:30.459366  392749 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 12:02:30.541779  392749 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 12:02:30.620528  392749 docker.go:233] disabling docker service ...
	I0916 12:02:30.620593  392749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 12:02:30.640092  392749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 12:02:30.651391  392749 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 12:02:30.734601  392749 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 12:02:30.821037  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 12:02:30.832165  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 12:02:30.847898  392749 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 12:02:30.847957  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.858440  392749 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 12:02:30.858500  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.868040  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.877381  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.886632  392749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 12:02:30.895686  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.905708  392749 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.921634  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.931283  392749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 12:02:30.939458  392749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 12:02:30.947335  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:31.023886  392749 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 12:02:31.126953  392749 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 12:02:31.127024  392749 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 12:02:31.130456  392749 start.go:563] Will wait 60s for crictl version
	I0916 12:02:31.130515  392749 ssh_runner.go:195] Run: which crictl
	I0916 12:02:31.134039  392749 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 12:02:31.166783  392749 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 12:02:31.166863  392749 ssh_runner.go:195] Run: crio --version
	I0916 12:02:31.202361  392749 ssh_runner.go:195] Run: crio --version
	I0916 12:02:31.240370  392749 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 12:02:31.241854  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 12:02:31.258991  392749 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 12:02:31.262509  392749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 12:02:31.272708  392749 kubeadm.go:883] updating cluster {Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 12:02:31.272831  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:31.272875  392749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:02:31.336596  392749 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:02:31.336618  392749 crio.go:433] Images already preloaded, skipping extraction
	I0916 12:02:31.336662  392749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:02:31.370353  392749 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:02:31.370403  392749 cache_images.go:84] Images are preloaded, skipping loading
	I0916 12:02:31.370410  392749 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 12:02:31.370494  392749 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-132595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 12:02:31.370555  392749 ssh_runner.go:195] Run: crio config
	I0916 12:02:31.414217  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:31.414235  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:31.414244  392749 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 12:02:31.414263  392749 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-132595 NodeName:embed-certs-132595 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 12:02:31.414385  392749 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-132595"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
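Note that the generated config above uses the kubeadm.k8s.io/v1beta3 API, which kubeadm later in this log flags as deprecated and suggests migrating with `kubeadm config migrate`. A minimal sketch of that migration, using the file path from this log (the --new-config output path is an illustrative choice, not taken from this run):

    # Rewrite the deprecated v1beta3 spec into the newer API version kubeadm suggests.
    # /var/tmp/minikube/kubeadm-migrated.yaml is a hypothetical output path.
    sudo kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml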
	
	I0916 12:02:31.414491  392749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 12:02:31.423224  392749 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 12:02:31.423288  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 12:02:31.431649  392749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I0916 12:02:31.448899  392749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 12:02:31.465819  392749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0916 12:02:31.484203  392749 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 12:02:31.487892  392749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 12:02:31.498931  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:31.578175  392749 ssh_runner.go:195] Run: sudo systemctl start kubelet
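One way to confirm that the unit file and the 10-kubeadm.conf drop-in copied above were picked up after the daemon-reload is the standard systemd tooling (a sketch; these commands are not part of this test run):

    # Render the kubelet unit together with its drop-in overrides.
    systemctl cat kubelet
    # Verify the service actually started.
    systemctl is-active kubelet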
	I0916 12:02:31.591266  392749 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595 for IP: 192.168.103.2
	I0916 12:02:31.591291  392749 certs.go:194] generating shared ca certs ...
	I0916 12:02:31.591306  392749 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.591451  392749 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 12:02:31.591500  392749 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 12:02:31.591510  392749 certs.go:256] generating profile certs ...
	I0916 12:02:31.591562  392749 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key
	I0916 12:02:31.591590  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt with IP's: []
	I0916 12:02:31.709220  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt ...
	I0916 12:02:31.709248  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt: {Name:mka1d5a1edf02835642de8bdc842db8cd676a26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.709443  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key ...
	I0916 12:02:31.709455  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key: {Name:mk9ae7714dfa095c3ad43e583257aef75ede0041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.709547  392749 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d
	I0916 12:02:31.709562  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 12:02:32.044005  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d ...
	I0916 12:02:32.044031  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d: {Name:mk6feeeb4fe0f8ff0e129b6995e86e98cd2ff58b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.044202  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d ...
	I0916 12:02:32.044216  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d: {Name:mk11b35c16267750006ba91ba79ac0aeb369ed92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.044290  392749 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt
	I0916 12:02:32.044387  392749 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key
	I0916 12:02:32.044449  392749 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key
	I0916 12:02:32.044464  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt with IP's: []
	I0916 12:02:32.194505  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt ...
	I0916 12:02:32.194536  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt: {Name:mk85df2a3dc9e98fc7219fc1ae15551b09a34988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.194715  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key ...
	I0916 12:02:32.194728  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key: {Name:mk563636bd095728c8aa5b89edf7c40089c8fbee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.194890  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 12:02:32.194928  392749 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 12:02:32.194939  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 12:02:32.194961  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 12:02:32.194983  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 12:02:32.195002  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 12:02:32.195037  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 12:02:32.195649  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 12:02:32.220426  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 12:02:32.243923  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 12:02:32.267291  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 12:02:32.290723  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 12:02:32.316617  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 12:02:32.339515  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 12:02:32.362149  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 12:02:32.385198  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 12:02:32.407529  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 12:02:32.430567  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 12:02:32.454462  392749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 12:02:32.471580  392749 ssh_runner.go:195] Run: openssl version
	I0916 12:02:32.476960  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 12:02:32.485882  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.489223  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.489271  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.495629  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 12:02:32.505521  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 12:02:32.514505  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.517905  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.517965  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.524346  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 12:02:32.533311  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 12:02:32.542142  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.545523  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.545594  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.552365  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
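The `test -L`/`ln -fs` pairs above implement OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs is reachable through a symlink named after the certificate's subject hash (e.g. b5213941.0 for minikubeCA above). A minimal sketch reproducing one such link, using paths from this log:

    # Print the subject hash OpenSSL uses to locate the CA at verification time.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Create the hashed symlink, e.g. /etc/ssl/certs/b5213941.0 as seen above.
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"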
	I0916 12:02:32.561599  392749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 12:02:32.565017  392749 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 12:02:32.565075  392749 kubeadm.go:392] StartCluster: {Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:02:32.565168  392749 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 12:02:32.565223  392749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 12:02:32.600725  392749 cri.go:89] found id: ""
	I0916 12:02:32.600782  392749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 12:02:32.610233  392749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 12:02:32.618479  392749 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 12:02:32.618526  392749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 12:02:32.626556  392749 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 12:02:32.626577  392749 kubeadm.go:157] found existing configuration files:
	
	I0916 12:02:32.626634  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 12:02:32.634946  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 12:02:32.635015  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 12:02:32.644145  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 12:02:32.652426  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 12:02:32.652498  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 12:02:32.660703  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 12:02:32.669431  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 12:02:32.669498  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 12:02:32.678101  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 12:02:32.686434  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 12:02:32.686497  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 12:02:32.694571  392749 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 12:02:32.733058  392749 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 12:02:32.733397  392749 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 12:02:32.749836  392749 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 12:02:32.749917  392749 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 12:02:32.749956  392749 kubeadm.go:310] OS: Linux
	I0916 12:02:32.750007  392749 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 12:02:32.750062  392749 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 12:02:32.750114  392749 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 12:02:32.750170  392749 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 12:02:32.750227  392749 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 12:02:32.750313  392749 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 12:02:32.750390  392749 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 12:02:32.750464  392749 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 12:02:32.750559  392749 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 12:02:32.804063  392749 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 12:02:32.804209  392749 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 12:02:32.804363  392749 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 12:02:32.810567  392749 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 12:02:32.813202  392749 out.go:235]   - Generating certificates and keys ...
	I0916 12:02:32.813305  392749 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 12:02:32.813417  392749 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 12:02:33.063616  392749 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 12:02:33.364335  392749 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 12:02:33.538855  392749 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 12:02:33.629054  392749 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 12:02:33.726242  392749 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 12:02:33.726358  392749 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-132595 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 12:02:33.819559  392749 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 12:02:33.819747  392749 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-132595 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 12:02:34.040985  392749 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 12:02:34.313148  392749 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 12:02:34.371964  392749 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 12:02:34.372034  392749 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 12:02:34.533586  392749 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 12:02:34.613255  392749 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 12:02:34.821003  392749 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 12:02:35.043370  392749 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 12:02:35.119304  392749 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 12:02:35.119834  392749 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 12:02:35.122405  392749 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 12:02:35.124431  392749 out.go:235]   - Booting up control plane ...
	I0916 12:02:35.124649  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 12:02:35.124761  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 12:02:35.125024  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 12:02:35.134352  392749 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 12:02:35.139484  392749 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 12:02:35.139586  392749 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 12:02:35.219432  392749 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 12:02:35.219608  392749 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 12:02:35.721291  392749 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.804738ms
	I0916 12:02:35.721439  392749 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 12:02:40.722572  392749 kubeadm.go:310] [api-check] The API server is healthy after 5.001398972s
	I0916 12:02:40.734247  392749 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 12:02:40.746322  392749 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 12:02:40.764937  392749 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 12:02:40.765155  392749 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-132595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 12:02:40.772968  392749 kubeadm.go:310] [bootstrap-token] Using token: 7gckm0.d0i7kpdezz05toci
	I0916 12:02:40.774166  392749 out.go:235]   - Configuring RBAC rules ...
	I0916 12:02:40.774305  392749 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 12:02:40.777587  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 12:02:40.783580  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 12:02:40.786051  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 12:02:40.788495  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 12:02:40.790852  392749 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 12:02:41.130233  392749 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 12:02:41.550016  392749 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 12:02:42.129585  392749 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 12:02:42.130524  392749 kubeadm.go:310] 
	I0916 12:02:42.130589  392749 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 12:02:42.130596  392749 kubeadm.go:310] 
	I0916 12:02:42.130670  392749 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 12:02:42.130680  392749 kubeadm.go:310] 
	I0916 12:02:42.130716  392749 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 12:02:42.130790  392749 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 12:02:42.130890  392749 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 12:02:42.130913  392749 kubeadm.go:310] 
	I0916 12:02:42.131015  392749 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 12:02:42.131033  392749 kubeadm.go:310] 
	I0916 12:02:42.131106  392749 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 12:02:42.131126  392749 kubeadm.go:310] 
	I0916 12:02:42.131207  392749 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 12:02:42.131315  392749 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 12:02:42.131419  392749 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 12:02:42.131430  392749 kubeadm.go:310] 
	I0916 12:02:42.131547  392749 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 12:02:42.131658  392749 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 12:02:42.131670  392749 kubeadm.go:310] 
	I0916 12:02:42.131792  392749 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7gckm0.d0i7kpdezz05toci \
	I0916 12:02:42.131887  392749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 12:02:42.131918  392749 kubeadm.go:310] 	--control-plane 
	I0916 12:02:42.131928  392749 kubeadm.go:310] 
	I0916 12:02:42.132048  392749 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 12:02:42.132058  392749 kubeadm.go:310] 
	I0916 12:02:42.132192  392749 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7gckm0.d0i7kpdezz05toci \
	I0916 12:02:42.132353  392749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 12:02:42.135019  392749 kubeadm.go:310] W0916 12:02:32.730384    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 12:02:42.135280  392749 kubeadm.go:310] W0916 12:02:32.731017    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 12:02:42.135512  392749 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 12:02:42.135628  392749 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
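The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of recomputing it on the control plane for comparison, using the certificatesDir configured earlier in this log (standard OpenSSL pipeline, not a command this test ran):

    # The hex digest should match the sha256:... value printed above.
    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256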
	I0916 12:02:42.135653  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:42.135665  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:42.138549  392749 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 12:02:42.139971  392749 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 12:02:42.143956  392749 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 12:02:42.143980  392749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 12:02:42.161613  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 12:02:42.354438  392749 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 12:02:42.354510  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:42.354533  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-132595 minikube.k8s.io/updated_at=2024_09_16T12_02_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=embed-certs-132595 minikube.k8s.io/primary=true
	I0916 12:02:42.504120  392749 ops.go:34] apiserver oom_adj: -16
	I0916 12:02:42.504136  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:43.004526  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:43.504277  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:44.004404  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:44.505245  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:45.004800  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:45.505119  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.004265  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.505238  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.574458  392749 kubeadm.go:1113] duration metric: took 4.220005009s to wait for elevateKubeSystemPrivileges
	I0916 12:02:46.574491  392749 kubeadm.go:394] duration metric: took 14.009421351s to StartCluster
	I0916 12:02:46.574511  392749 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:46.574575  392749 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:02:46.576211  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:46.576447  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 12:02:46.576447  392749 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:02:46.576532  392749 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 12:02:46.576624  392749 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132595"
	I0916 12:02:46.576643  392749 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132595"
	I0916 12:02:46.576668  392749 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:46.576688  392749 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132595"
	I0916 12:02:46.576656  392749 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-132595"
	I0916 12:02:46.576774  392749 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:02:46.577752  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.577919  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.579915  392749 out.go:177] * Verifying Kubernetes components...
	I0916 12:02:46.581417  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:46.602279  392749 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 12:02:46.603012  392749 addons.go:234] Setting addon default-storageclass=true in "embed-certs-132595"
	I0916 12:02:46.603054  392749 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:02:46.603533  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.603631  392749 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:02:46.603648  392749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 12:02:46.603704  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:46.623554  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:46.627265  392749 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 12:02:46.627294  392749 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 12:02:46.627365  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:46.652386  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:46.708797  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
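The sed pipeline above splices a hosts block into the CoreDNS Corefile ahead of the forward directive, so in-cluster lookups of host.minikube.internal resolve to the host gateway while all other names fall through. The injected stanza, reconstructed from the sed expression:

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }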
	I0916 12:02:46.808404  392749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:02:46.819270  392749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:02:47.003989  392749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 12:02:47.322159  392749 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 12:02:47.325904  392749 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132595" to be "Ready" ...
	I0916 12:02:47.699966  392749 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 12:02:47.701431  392749 addons.go:510] duration metric: took 1.124891375s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 12:02:47.827676  392749 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-132595" context rescaled to 1 replicas
	I0916 12:02:49.329713  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:51.829363  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:53.829520  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:55.843996  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:58.329318  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:00.329424  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:02.329815  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:04.829037  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:06.829132  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:08.829193  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:11.329223  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:13.829206  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:16.329632  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:18.829227  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:21.329475  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:23.829047  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:25.829344  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:27.830100  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:28.329181  392749 node_ready.go:49] node "embed-certs-132595" has status "Ready":"True"
	I0916 12:03:28.329205  392749 node_ready.go:38] duration metric: took 41.003271788s for node "embed-certs-132595" to be "Ready" ...
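From the client side, the 41s Ready poll above is equivalent to waiting on the node's Ready condition (a sketch mirroring minikube's 6m0s budget; not a command this test ran, and kubectl itself is broken in this environment):

    kubectl wait --for=condition=Ready node/embed-certs-132595 --timeout=6m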
	I0916 12:03:28.329215  392749 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:03:28.335458  392749 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.841863  392749 pod_ready.go:93] pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.841891  392749 pod_ready.go:82] duration metric: took 1.506402305s for pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.841902  392749 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.846650  392749 pod_ready.go:93] pod "etcd-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.846672  392749 pod_ready.go:82] duration metric: took 4.765058ms for pod "etcd-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.846685  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.851324  392749 pod_ready.go:93] pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.851348  392749 pod_ready.go:82] duration metric: took 4.655631ms for pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.851361  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.855398  392749 pod_ready.go:93] pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.855418  392749 pod_ready.go:82] duration metric: took 4.049899ms for pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.855427  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5jjq9" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.929577  392749 pod_ready.go:93] pod "kube-proxy-5jjq9" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.929598  392749 pod_ready.go:82] duration metric: took 74.164746ms for pod "kube-proxy-5jjq9" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.929610  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:30.329892  392749 pod_ready.go:93] pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:30.329915  392749 pod_ready.go:82] duration metric: took 400.298548ms for pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:30.329926  392749 pod_ready.go:39] duration metric: took 2.000698892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:03:30.329943  392749 api_server.go:52] waiting for apiserver process to appear ...
	I0916 12:03:30.329991  392749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 12:03:30.341465  392749 api_server.go:72] duration metric: took 43.764981639s to wait for apiserver process to appear ...
	I0916 12:03:30.341495  392749 api_server.go:88] waiting for apiserver healthz status ...
	I0916 12:03:30.341517  392749 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 12:03:30.346926  392749 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 12:03:30.347857  392749 api_server.go:141] control plane version: v1.31.1
	I0916 12:03:30.347882  392749 api_server.go:131] duration metric: took 6.380265ms to wait for apiserver health ...
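The healthz probe above can be reproduced by hand; the endpoint returns the literal body "ok" with HTTP 200 (a sketch; -k because the apiserver certificate is not in the local trust store):

    # Expect HTTP 200 with body "ok", matching the check logged above.
    curl -k https://192.168.103.2:8443/healthz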
	I0916 12:03:30.347891  392749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 12:03:30.533749  392749 system_pods.go:59] 8 kube-system pods found
	I0916 12:03:30.533781  392749 system_pods.go:61] "coredns-7c65d6cfc9-lmhpj" [dec7e28f-bb5b-4238-abf8-a17607466015] Running
	I0916 12:03:30.533787  392749 system_pods.go:61] "etcd-embed-certs-132595" [a0b7465f-7b8a-4c03-9c7b-9aba551d7d98] Running
	I0916 12:03:30.533791  392749 system_pods.go:61] "kindnet-s4vkq" [8a7383ab-18b0-4118-9810-ff1cbbdd9ecf] Running
	I0916 12:03:30.533795  392749 system_pods.go:61] "kube-apiserver-embed-certs-132595" [8df2452b-d2dc-44af-86cb-75d1fb8a71d5] Running
	I0916 12:03:30.533798  392749 system_pods.go:61] "kube-controller-manager-embed-certs-132595" [673d272a-803b-45a5-81e7-ba32ff89ec4f] Running
	I0916 12:03:30.533801  392749 system_pods.go:61] "kube-proxy-5jjq9" [da63c6b0-19b1-4ab0-abc4-ac2b785e8e88] Running
	I0916 12:03:30.533805  392749 system_pods.go:61] "kube-scheduler-embed-certs-132595" [b8f3262f-ab89-4efd-8ec2-bcea70ce3c3f] Running
	I0916 12:03:30.533808  392749 system_pods.go:61] "storage-provisioner" [b94fecd1-4b72-474b-9296-fb5c86912f64] Running
	I0916 12:03:30.533814  392749 system_pods.go:74] duration metric: took 185.917389ms to wait for pod list to return data ...
	I0916 12:03:30.533821  392749 default_sa.go:34] waiting for default service account to be created ...
	I0916 12:03:30.730240  392749 default_sa.go:45] found service account: "default"
	I0916 12:03:30.730269  392749 default_sa.go:55] duration metric: took 196.441382ms for default service account to be created ...
	I0916 12:03:30.730278  392749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 12:03:30.932106  392749 system_pods.go:86] 8 kube-system pods found
	I0916 12:03:30.932141  392749 system_pods.go:89] "coredns-7c65d6cfc9-lmhpj" [dec7e28f-bb5b-4238-abf8-a17607466015] Running
	I0916 12:03:30.932149  392749 system_pods.go:89] "etcd-embed-certs-132595" [a0b7465f-7b8a-4c03-9c7b-9aba551d7d98] Running
	I0916 12:03:30.932155  392749 system_pods.go:89] "kindnet-s4vkq" [8a7383ab-18b0-4118-9810-ff1cbbdd9ecf] Running
	I0916 12:03:30.932160  392749 system_pods.go:89] "kube-apiserver-embed-certs-132595" [8df2452b-d2dc-44af-86cb-75d1fb8a71d5] Running
	I0916 12:03:30.932165  392749 system_pods.go:89] "kube-controller-manager-embed-certs-132595" [673d272a-803b-45a5-81e7-ba32ff89ec4f] Running
	I0916 12:03:30.932170  392749 system_pods.go:89] "kube-proxy-5jjq9" [da63c6b0-19b1-4ab0-abc4-ac2b785e8e88] Running
	I0916 12:03:30.932175  392749 system_pods.go:89] "kube-scheduler-embed-certs-132595" [b8f3262f-ab89-4efd-8ec2-bcea70ce3c3f] Running
	I0916 12:03:30.932180  392749 system_pods.go:89] "storage-provisioner" [b94fecd1-4b72-474b-9296-fb5c86912f64] Running
	I0916 12:03:30.932189  392749 system_pods.go:126] duration metric: took 201.903374ms to wait for k8s-apps to be running ...
	I0916 12:03:30.932199  392749 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 12:03:30.932250  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 12:03:30.943724  392749 system_svc.go:56] duration metric: took 11.513209ms WaitForService to wait for kubelet
	I0916 12:03:30.943753  392749 kubeadm.go:582] duration metric: took 44.367276865s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:03:30.943776  392749 node_conditions.go:102] verifying NodePressure condition ...
	I0916 12:03:31.130459  392749 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 12:03:31.130490  392749 node_conditions.go:123] node cpu capacity is 8
	I0916 12:03:31.130506  392749 node_conditions.go:105] duration metric: took 186.72463ms to run NodePressure ...
	I0916 12:03:31.130519  392749 start.go:241] waiting for startup goroutines ...
	I0916 12:03:31.130528  392749 start.go:246] waiting for cluster config update ...
	I0916 12:03:31.130542  392749 start.go:255] writing updated cluster config ...
	I0916 12:03:31.130846  392749 ssh_runner.go:195] Run: rm -f paused
	I0916 12:03:31.136980  392749 out.go:177] * Done! kubectl is now configured to use "embed-certs-132595" cluster and "default" namespace by default
	E0916 12:03:31.138336  392749 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
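The kubectl failure above ("exec format error") is the kernel refusing to execute the binary, which typically means it was built for a different architecture than the host or is truncated. A sketch of the usual first diagnostic (not run by this test):

    # Report the binary's format; a mismatch with `uname -m` explains the error.
    file /usr/local/bin/kubectl
    uname -m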
	
	
	==> CRI-O <==
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.568561957Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=86d2dd83-a69f-4100-8a41-c43a51cc8aed name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.570462648Z" level=info msg="Got pod network &{Name:coredns-7c65d6cfc9-lmhpj Namespace:kube-system ID:0699e4a1527a5e22ec2c5a3eae9411ebd5f65603f653f4ead716e20f5f2ea774 UID:dec7e28f-bb5b-4238-abf8-a17607466015 NetNS:/var/run/netns/f5817722-6ecf-471c-a9b7-43486a0002b3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.570588826Z" level=info msg="Checking pod kube-system_coredns-7c65d6cfc9-lmhpj for CNI network kindnet (type=ptp)"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.572423980Z" level=info msg="Ran pod sandbox e29106fe85d76fa4de619461e9d9494576a9dccd6c91884e0f1da2ca7d20785d with infra container: kube-system/storage-provisioner/POD" id=86d2dd83-a69f-4100-8a41-c43a51cc8aed name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.573376430Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb33efd2-983e-4902-bc01-08f82368b237 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.573461657Z" level=info msg="Ran pod sandbox 0699e4a1527a5e22ec2c5a3eae9411ebd5f65603f653f4ead716e20f5f2ea774 with infra container: kube-system/coredns-7c65d6cfc9-lmhpj/POD" id=cbe6f7c1-4df2-4f71-94f9-b99abe59bc6d name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.573660173Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bb33efd2-983e-4902-bc01-08f82368b237 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.574348611Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=501e55a1-9662-4590-b631-8ced001643cb name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.574410998Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=a456219a-02a2-49f9-9936-650df962db25 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.574580246Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=a456219a-02a2-49f9-9936-650df962db25 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.574605317Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=501e55a1-9662-4590-b631-8ced001643cb name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575178636Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=c366e22a-5e1f-491e-9035-6fd374bfe7b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575303969Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=59687f8b-e7c9-4348-8d28-901638ae293b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575368833Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c366e22a-5e1f-491e-9035-6fd374bfe7b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575400341Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.576002923Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-lmhpj/coredns" id=8d08e494-ec0b-4b6d-be32-c6718a9b0d95 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.576092718Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.586948820Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e375941d0176fe56097181bec36e35d52a5a7d8cd1d147901099904551bc4537/merged/etc/passwd: no such file or directory"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.586986448Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e375941d0176fe56097181bec36e35d52a5a7d8cd1d147901099904551bc4537/merged/etc/group: no such file or directory"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.628157312Z" level=info msg="Created container 04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02: kube-system/storage-provisioner/storage-provisioner" id=59687f8b-e7c9-4348-8d28-901638ae293b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.628774234Z" level=info msg="Starting container: 04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02" id=c6eae36f-652b-4bb4-ba8e-a3d4aef3a5a8 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.636335332Z" level=info msg="Started container" PID=2235 containerID=04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02 description=kube-system/storage-provisioner/storage-provisioner id=c6eae36f-652b-4bb4-ba8e-a3d4aef3a5a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e29106fe85d76fa4de619461e9d9494576a9dccd6c91884e0f1da2ca7d20785d
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.638679752Z" level=info msg="Created container 6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454: kube-system/coredns-7c65d6cfc9-lmhpj/coredns" id=8d08e494-ec0b-4b6d-be32-c6718a9b0d95 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.639344270Z" level=info msg="Starting container: 6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454" id=08658758-7578-4590-a126-dd9c8a60b47c name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.646205266Z" level=info msg="Started container" PID=2253 containerID=6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454 description=kube-system/coredns-7c65d6cfc9-lmhpj/coredns id=08658758-7578-4590-a126-dd9c8a60b47c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0699e4a1527a5e22ec2c5a3eae9411ebd5f65603f653f4ead716e20f5f2ea774
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6198a816b1cdf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   5 seconds ago       Running             coredns                   0                   0699e4a1527a5       coredns-7c65d6cfc9-lmhpj
	04bb82a52f980       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 seconds ago       Running             storage-provisioner       0                   e29106fe85d76       storage-provisioner
	7e5dd3f1d7192       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   46 seconds ago      Running             kindnet-cni               0                   71c9e456d6bf1       kindnet-s4vkq
	044a317804ef8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   46 seconds ago      Running             kube-proxy                0                   d7e6cbd74393e       kube-proxy-5jjq9
	aa00ff4074279       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   57 seconds ago      Running             etcd                      0                   079241af2ce23       etcd-embed-certs-132595
	43acde1f85a74       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   57 seconds ago      Running             kube-controller-manager   0                   9e06df2fe014b       kube-controller-manager-embed-certs-132595
	2e260ff8685de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   57 seconds ago      Running             kube-scheduler            0                   f4b76295d450a       kube-scheduler-embed-certs-132595
	09176dad2cb1c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   57 seconds ago      Running             kube-apiserver            0                   06c0e46f86ec9       kube-apiserver-embed-certs-132595
	
	
	==> coredns [6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43571 - 19451 "HINFO IN 8873162753112370163.8194975584790532838. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011764927s
	
	
	==> describe nodes <==
	Name:               embed-certs-132595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-132595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=embed-certs-132595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T12_02_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 12:02:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-132595
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 12:03:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:03:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-132595
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdbbf6049dff4c2fbfb05ee6d4e44c79
	  System UUID:                ac9bc1b7-26e7-4faa-ad97-c61b5564343d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-lmhpj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     48s
	  kube-system                 etcd-embed-certs-132595                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         53s
	  kube-system                 kindnet-s4vkq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      48s
	  kube-system                 kube-apiserver-embed-certs-132595             250m (3%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-embed-certs-132595    200m (2%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-5jjq9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-embed-certs-132595             100m (1%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 46s                kube-proxy       
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x7 over 59s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 53s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  53s                kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s                kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s                kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node embed-certs-132595 event: Registered Node embed-certs-132595 in Controller
	  Normal   NodeReady                6s                 kubelet          Node embed-certs-132595 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.954619] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000006] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.059994] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000007] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +6.207537] net_ratelimit: 5 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +8.191403] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000002] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.003944] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000002] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	
	
	==> etcd [aa00ff407427948b5e089e635cd56649f686d7a6c9e475586db68aae2101c56f] <==
	{"level":"info","ts":"2024-09-16T12:02:36.601518Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T12:02:36.601616Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T12:02:36.601663Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T12:02:36.601767Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T12:02:36.601799Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T12:02:37.030837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.032010Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.032867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T12:02:37.032863Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-132595 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T12:02:37.032902Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T12:02:37.033174Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T12:02:37.033206Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T12:02:37.033382Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.033486Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.033512Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.034156Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T12:02:37.034157Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T12:02:37.035300Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T12:02:37.035303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:03:34 up  1:45,  0 users,  load average: 1.47, 1.14, 0.98
	Linux embed-certs-132595 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7e5dd3f1d71925f826db082ad675d3101e7aae3acce70fd7b76a514c9a89f6fd] <==
	W0916 12:03:18.015731       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015833       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015860       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015837       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:18.015898       1 trace.go:236] Trace[31706214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[31706214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[31706214]: [30.001681318s] [30.001681318s] END
	I0916 12:03:18.015898       1 trace.go:236] Trace[592354911]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[592354911]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[592354911]: [30.001620265s] [30.001620265s] END
	E0916 12:03:18.015924       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0916 12:03:18.015923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:18.015935       1 trace.go:236] Trace[1491040377]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[1491040377]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[1491040377]: [30.001712492s] [30.001712492s] END
	I0916 12:03:18.015935       1 trace.go:236] Trace[2145919099]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[2145919099]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[2145919099]: [30.001694697s] [30.001694697s] END
	E0916 12:03:18.015954       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0916 12:03:18.015960       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:19.315171       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 12:03:19.315197       1 metrics.go:61] Registering metrics
	I0916 12:03:19.315264       1 controller.go:374] Syncing nftables rules
	I0916 12:03:28.021447       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:03:28.021489       1 main.go:299] handling current node
	
	
	==> kube-apiserver [09176dad2cb1c2af9cd3430fb4f7fca0bd2ff37e3126706cf8504f9f1f4f54cc] <==
	I0916 12:02:39.038473       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 12:02:39.038479       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 12:02:39.038485       1 cache.go:39] Caches are synced for autoregister controller
	I0916 12:02:39.093479       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 12:02:39.093511       1 policy_source.go:224] refreshing policies
	I0916 12:02:39.093524       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0916 12:02:39.096961       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0916 12:02:39.097047       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0916 12:02:39.140068       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 12:02:39.299961       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 12:02:39.942205       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 12:02:39.946102       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 12:02:39.946121       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 12:02:40.388883       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 12:02:40.428179       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 12:02:40.547548       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 12:02:40.553786       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 12:02:40.554918       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 12:02:40.559315       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 12:02:41.002384       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 12:02:41.536771       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 12:02:41.548758       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 12:02:41.557550       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 12:02:46.656584       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 12:02:46.807047       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [43acde1f85a74d4bd7d60bb7ed1dbd6e59079441a502cee1be854f6abe5e35b6] <==
	I0916 12:02:46.002200       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-132595"
	I0916 12:02:46.002244       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0916 12:02:46.003457       1 shared_informer.go:320] Caches are synced for job
	I0916 12:02:46.004658       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 12:02:46.004668       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 12:02:46.005761       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 12:02:46.362324       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 12:02:46.362359       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 12:02:46.373448       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 12:02:46.510926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:02:47.005464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="311.027327ms"
	I0916 12:02:47.016485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.96288ms"
	I0916 12:02:47.016598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.136µs"
	I0916 12:02:47.016676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="46.065µs"
	I0916 12:02:47.095940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.503µs"
	I0916 12:02:47.497738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.189918ms"
	I0916 12:02:47.507073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.188248ms"
	I0916 12:02:47.507293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.054µs"
	I0916 12:03:28.228034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:03:28.237164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:03:28.243973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="113.42µs"
	I0916 12:03:28.254166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.618µs"
	I0916 12:03:29.601516       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.101551ms"
	I0916 12:03:29.601730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.683µs"
	I0916 12:03:31.009954       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [044a317804ef8bd211cafdc21ae7bf14d25d5e48ffbf28d2a623796fc0f3bec3] <==
	I0916 12:02:47.629252       1 server_linux.go:66] "Using iptables proxy"
	I0916 12:02:47.753705       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 12:02:47.753773       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 12:02:47.773672       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 12:02:47.773734       1 server_linux.go:169] "Using iptables Proxier"
	I0916 12:02:47.775582       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 12:02:47.775952       1 server.go:483] "Version info" version="v1.31.1"
	I0916 12:02:47.775990       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 12:02:47.778496       1 config.go:105] "Starting endpoint slice config controller"
	I0916 12:02:47.778576       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 12:02:47.778508       1 config.go:328] "Starting node config controller"
	I0916 12:02:47.778658       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 12:02:47.778538       1 config.go:199] "Starting service config controller"
	I0916 12:02:47.778717       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 12:02:47.879076       1 shared_informer.go:320] Caches are synced for service config
	I0916 12:02:47.879101       1 shared_informer.go:320] Caches are synced for node config
	I0916 12:02:47.879085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2e260ff8685de88af344ef117d8cbfa1ff17b511040b27ce76779a023b1eaa4d] <==
	W0916 12:02:39.021480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 12:02:39.021498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.021539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 12:02:39.021562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.021575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 12:02:39.021601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.859282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 12:02:39.859324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.895908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 12:02:39.895947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.912462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 12:02:39.912503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.947122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 12:02:39.947168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.965072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 12:02:39.965130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.967121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 12:02:39.967256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.977801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 12:02:39.977847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:40.200165       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 12:02:40.200204       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 12:02:40.236012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 12:02:40.236065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 12:02:42.917675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995279    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml6lb\" (UniqueName: \"kubernetes.io/projected/da63c6b0-19b1-4ab0-abc4-ac2b785e8e88-kube-api-access-ml6lb\") pod \"kube-proxy-5jjq9\" (UID: \"da63c6b0-19b1-4ab0-abc4-ac2b785e8e88\") " pod="kube-system/kube-proxy-5jjq9"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995363    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da63c6b0-19b1-4ab0-abc4-ac2b785e8e88-xtables-lock\") pod \"kube-proxy-5jjq9\" (UID: \"da63c6b0-19b1-4ab0-abc4-ac2b785e8e88\") " pod="kube-system/kube-proxy-5jjq9"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995413    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a7383ab-18b0-4118-9810-ff1cbbdd9ecf-xtables-lock\") pod \"kindnet-s4vkq\" (UID: \"8a7383ab-18b0-4118-9810-ff1cbbdd9ecf\") " pod="kube-system/kindnet-s4vkq"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995449    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da63c6b0-19b1-4ab0-abc4-ac2b785e8e88-lib-modules\") pod \"kube-proxy-5jjq9\" (UID: \"da63c6b0-19b1-4ab0-abc4-ac2b785e8e88\") " pod="kube-system/kube-proxy-5jjq9"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995472    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8a7383ab-18b0-4118-9810-ff1cbbdd9ecf-cni-cfg\") pod \"kindnet-s4vkq\" (UID: \"8a7383ab-18b0-4118-9810-ff1cbbdd9ecf\") " pod="kube-system/kindnet-s4vkq"
	Sep 16 12:02:46 embed-certs-132595 kubelet[1652]: I0916 12:02:46.995497    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nf57\" (UniqueName: \"kubernetes.io/projected/8a7383ab-18b0-4118-9810-ff1cbbdd9ecf-kube-api-access-9nf57\") pod \"kindnet-s4vkq\" (UID: \"8a7383ab-18b0-4118-9810-ff1cbbdd9ecf\") " pod="kube-system/kindnet-s4vkq"
	Sep 16 12:02:47 embed-certs-132595 kubelet[1652]: I0916 12:02:47.103151    1652 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 12:02:48 embed-certs-132595 kubelet[1652]: I0916 12:02:48.507465    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s4vkq" podStartSLOduration=2.507441772 podStartE2EDuration="2.507441772s" podCreationTimestamp="2024-09-16 12:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:02:48.507392289 +0000 UTC m=+7.183060533" watchObservedRunningTime="2024-09-16 12:02:48.507441772 +0000 UTC m=+7.183110017"
	Sep 16 12:02:48 embed-certs-132595 kubelet[1652]: I0916 12:02:48.517119    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jjq9" podStartSLOduration=2.5170927770000002 podStartE2EDuration="2.517092777s" podCreationTimestamp="2024-09-16 12:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:02:48.517001015 +0000 UTC m=+7.192669259" watchObservedRunningTime="2024-09-16 12:02:48.517092777 +0000 UTC m=+7.192761020"
	Sep 16 12:02:51 embed-certs-132595 kubelet[1652]: E0916 12:02:51.434502    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488171434318936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:02:51 embed-certs-132595 kubelet[1652]: E0916 12:02:51.434548    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488171434318936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:01 embed-certs-132595 kubelet[1652]: E0916 12:03:01.435826    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488181435634385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:01 embed-certs-132595 kubelet[1652]: E0916 12:03:01.435872    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488181435634385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:11 embed-certs-132595 kubelet[1652]: E0916 12:03:11.437372    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488191437189942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:11 embed-certs-132595 kubelet[1652]: E0916 12:03:11.437413    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488191437189942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:21 embed-certs-132595 kubelet[1652]: E0916 12:03:21.438567    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488201438406245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:21 embed-certs-132595 kubelet[1652]: E0916 12:03:21.438612    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488201438406245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.218921    1652 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387462    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dec7e28f-bb5b-4238-abf8-a17607466015-config-volume\") pod \"coredns-7c65d6cfc9-lmhpj\" (UID: \"dec7e28f-bb5b-4238-abf8-a17607466015\") " pod="kube-system/coredns-7c65d6cfc9-lmhpj"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387509    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2qs\" (UniqueName: \"kubernetes.io/projected/dec7e28f-bb5b-4238-abf8-a17607466015-kube-api-access-qm2qs\") pod \"coredns-7c65d6cfc9-lmhpj\" (UID: \"dec7e28f-bb5b-4238-abf8-a17607466015\") " pod="kube-system/coredns-7c65d6cfc9-lmhpj"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387532    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b94fecd1-4b72-474b-9296-fb5c86912f64-tmp\") pod \"storage-provisioner\" (UID: \"b94fecd1-4b72-474b-9296-fb5c86912f64\") " pod="kube-system/storage-provisioner"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387546    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w5lv\" (UniqueName: \"kubernetes.io/projected/b94fecd1-4b72-474b-9296-fb5c86912f64-kube-api-access-2w5lv\") pod \"storage-provisioner\" (UID: \"b94fecd1-4b72-474b-9296-fb5c86912f64\") " pod="kube-system/storage-provisioner"
	Sep 16 12:03:29 embed-certs-132595 kubelet[1652]: I0916 12:03:29.584364    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.584345227 podStartE2EDuration="42.584345227s" podCreationTimestamp="2024-09-16 12:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:03:29.58423399 +0000 UTC m=+48.259902234" watchObservedRunningTime="2024-09-16 12:03:29.584345227 +0000 UTC m=+48.260013470"
	Sep 16 12:03:31 embed-certs-132595 kubelet[1652]: E0916 12:03:31.439757    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488211439535902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:31 embed-certs-132595 kubelet[1652]: E0916 12:03:31.439799    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488211439535902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02] <==
	I0916 12:03:28.648757       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 12:03:28.659095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 12:03:28.659135       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 12:03:28.701058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 12:03:28.701294       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794!
	I0916 12:03:28.701548       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"145c877d-a7a1-47fc-887a-f3ff6cf439ce", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794 became leader
	I0916 12:03:28.801622       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132595 -n embed-certs-132595
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (572.377µs)
helpers_test.go:263: kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (3.60s)
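
Every kubectl invocation in this run dies the same way: "fork/exec /usr/local/bin/kubectl: exec format error". That error comes from the kernel refusing to load the binary at all, before kubectl executes a single instruction, which typically means the file at /usr/local/bin/kubectl was built for a different architecture or is a truncated/corrupt download; the cluster is never consulted. Below is a minimal Go sketch of both halves of that diagnosis, assuming the amd64 host this job ran on (the kubectl path is taken from the logs above):

	package main

	import (
		"debug/elf"
		"fmt"
		"os/exec"
	)

	func main() {
		// A binary the kernel cannot load fails at exec time with the same
		// "fork/exec <path>: exec format error" reported throughout this run.
		if err := exec.Command("/usr/local/bin/kubectl", "version", "--client").Run(); err != nil {
			fmt.Println("run failed:", err)
		}

		// debug/elf reveals which machine the binary was actually built for.
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			fmt.Println("not a readable ELF file:", err) // e.g. truncated download
			return
		}
		defer f.Close()
		// On this amd64 host, anything other than EM_X86_64 explains the error.
		fmt.Println("built for:", f.Machine)
	}

The identical error string across every failed test in this report is consistent with one shared bad kubectl install on the agent rather than anything cluster-side.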

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.6s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-132595 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-132595 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-132595 describe deploy/metrics-server -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (595.97µs)
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-132595 describe deploy/metrics-server -n kube-system": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
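The expected string above is the two addon flags composed: the --registries value is prefixed onto the --images value, so MetricsServer=fake.domain plus MetricsServer=registry.k8s.io/echoserver:1.4 should yield an image reference of fake.domain/registry.k8s.io/echoserver:1.4 in the deployment. A tiny sketch of that composition (variable names are illustrative, not minikube's); note the check never actually ran here, because the kubectl describe probe itself failed with the exec format error:

	package main

	import "fmt"

	func main() {
		registry := "fake.domain"                 // from --registries=MetricsServer=...
		image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...

		// The test expects the deployment's image to carry the registry prefix.
		expected := registry + "/" + image
		fmt.Println(expected) // fake.domain/registry.k8s.io/echoserver:1.4
	}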
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-132595
helpers_test.go:235: (dbg) docker inspect embed-certs-132595:
-- stdout --
	[
	    {
	        "Id": "9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95",
	        "Created": "2024-09-16T12:02:27.844570227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393450,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T12:02:27.964272788Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/hosts",
	        "LogPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95-json.log",
	        "Name": "/embed-certs-132595",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-132595:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-132595",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-132595",
	                "Source": "/var/lib/docker/volumes/embed-certs-132595/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-132595",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-132595",
	                "name.minikube.sigs.k8s.io": "embed-certs-132595",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8051876631e629be3d63d04a25b08c24b1f81adc45f3ad239f7bc136e91b56ad",
	            "SandboxKey": "/var/run/docker/netns/8051876631e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-132595": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2bfc3c9091b0bc051827133f808c3cb85965e63d2bf1e9667fc1a6a160dc08f4",
	                    "EndpointID": "2e4a82502e88e3414290611bf291eaf399e6bd167c079853617718aca5cc9c76",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-132595",
	                        "9f079caa1423"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
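The NetworkSettings.Ports map in the JSON above is how the test harness discovers the host ports Docker assigned to the container; the same "22/tcp" lookup appears later in this log as a Go template passed to docker container inspect. A small sketch of the equivalent lookup in Go, using a pared-down struct that mirrors only the fields read here (port 33128 is the SSH mapping shown above):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// portBinding mirrors one entry of NetworkSettings.Ports from docker inspect.
	type portBinding struct {
		HostIp   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	func main() {
		raw := []byte(`{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33128"}]}`)
		var ports map[string][]portBinding
		if err := json.Unmarshal(raw, &ports); err != nil {
			panic(err)
		}
		// Equivalent of the template
		// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		fmt.Println(ports["22/tcp"][0].HostPort) // 33128
	}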
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132595 -n embed-certs-132595
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-132595 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-132595 logs -n 25: (1.123639726s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| delete  | -p no-preload-179932                                   | no-preload-179932            | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-451928  | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-451928       | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-451928                           | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-483277 --memory=2200 --alsologtostderr   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-483277             | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-483277                  | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-483277 --memory=2200 --alsologtostderr   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-483277 image list                           | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	| delete  | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	| start   | -p embed-certs-132595                                  | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-132595            | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:03 UTC | 16 Sep 24 12:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 12:02:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 12:02:22.316707  392749 out.go:345] Setting OutFile to fd 1 ...
	I0916 12:02:22.316980  392749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:02:22.316990  392749 out.go:358] Setting ErrFile to fd 2...
	I0916 12:02:22.316994  392749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:02:22.317211  392749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 12:02:22.317988  392749 out.go:352] Setting JSON to false
	I0916 12:02:22.319189  392749 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6282,"bootTime":1726481860,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 12:02:22.319253  392749 start.go:139] virtualization: kvm guest
	I0916 12:02:22.321724  392749 out.go:177] * [embed-certs-132595] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 12:02:22.323580  392749 notify.go:220] Checking for updates...
	I0916 12:02:22.323619  392749 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 12:02:22.325184  392749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 12:02:22.326831  392749 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:02:22.328293  392749 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 12:02:22.329741  392749 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 12:02:22.331375  392749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 12:02:22.333444  392749 config.go:182] Loaded profile config "bridge-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333594  392749 config.go:182] Loaded profile config "custom-flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333730  392749 config.go:182] Loaded profile config "flannel-838467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:22.333861  392749 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 12:02:22.357827  392749 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 12:02:22.357973  392749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 12:02:22.415015  392749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 12:02:22.404189354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 12:02:22.415142  392749 docker.go:318] overlay module found
	I0916 12:02:22.418459  392749 out.go:177] * Using the docker driver based on user configuration
	I0916 12:02:22.420009  392749 start.go:297] selected driver: docker
	I0916 12:02:22.420030  392749 start.go:901] validating driver "docker" against <nil>
	I0916 12:02:22.420041  392749 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 12:02:22.420849  392749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 12:02:22.481968  392749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 12:02:22.472332251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 12:02:22.482174  392749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 12:02:22.482464  392749 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:02:22.484723  392749 out.go:177] * Using Docker driver with root privileges
	I0916 12:02:22.486426  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:22.486474  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:22.486482  392749 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 12:02:22.486556  392749 start.go:340] cluster config:
	{Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:02:22.488572  392749 out.go:177] * Starting "embed-certs-132595" primary control-plane node in "embed-certs-132595" cluster
	I0916 12:02:22.490260  392749 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 12:02:22.492012  392749 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 12:02:22.493615  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:22.493670  392749 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 12:02:22.493684  392749 cache.go:56] Caching tarball of preloaded images
	I0916 12:02:22.493725  392749 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 12:02:22.493780  392749 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 12:02:22.493797  392749 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 12:02:22.493914  392749 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json ...
	I0916 12:02:22.493936  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json: {Name:mk85e2df12eb3418e581ab1558bdddacab4821d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 12:02:22.516611  392749 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 12:02:22.516634  392749 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 12:02:22.516701  392749 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 12:02:22.516717  392749 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 12:02:22.516721  392749 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 12:02:22.516728  392749 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 12:02:22.516735  392749 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 12:02:22.577454  392749 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 12:02:22.577503  392749 cache.go:194] Successfully downloaded all kic artifacts
	I0916 12:02:22.577543  392749 start.go:360] acquireMachinesLock for embed-certs-132595: {Name:mk90285717afa09eeba6eb1eaf13ca243fd0e8ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:02:22.577688  392749 start.go:364] duration metric: took 123.446µs to acquireMachinesLock for "embed-certs-132595"
	I0916 12:02:22.577716  392749 start.go:93] Provisioning new machine with config: &{Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:02:22.577790  392749 start.go:125] createHost starting for "" (driver="docker")
	I0916 12:02:22.580825  392749 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 12:02:22.581158  392749 start.go:159] libmachine.API.Create for "embed-certs-132595" (driver="docker")
	I0916 12:02:22.581194  392749 client.go:168] LocalClient.Create starting
	I0916 12:02:22.581279  392749 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem
	I0916 12:02:22.581315  392749 main.go:141] libmachine: Decoding PEM data...
	I0916 12:02:22.581364  392749 main.go:141] libmachine: Parsing certificate...
	I0916 12:02:22.581424  392749 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem
	I0916 12:02:22.581453  392749 main.go:141] libmachine: Decoding PEM data...
	I0916 12:02:22.581469  392749 main.go:141] libmachine: Parsing certificate...
	I0916 12:02:22.581917  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 12:02:22.601058  392749 cli_runner.go:211] docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 12:02:22.601120  392749 network_create.go:284] running [docker network inspect embed-certs-132595] to gather additional debugging logs...
	I0916 12:02:22.601136  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595
	W0916 12:02:22.619588  392749 cli_runner.go:211] docker network inspect embed-certs-132595 returned with exit code 1
	I0916 12:02:22.619629  392749 network_create.go:287] error running [docker network inspect embed-certs-132595]: docker network inspect embed-certs-132595: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-132595 not found
	I0916 12:02:22.619641  392749 network_create.go:289] output of [docker network inspect embed-certs-132595]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-132595 not found
	
	** /stderr **
	I0916 12:02:22.619744  392749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 12:02:22.638437  392749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1162a04f8fb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:9f:3b:1f} reservation:<nil>}
	I0916 12:02:22.639338  392749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-38a96cee1ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6e:95:c7:eb} reservation:<nil>}
	I0916 12:02:22.640220  392749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a5a173559814 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:1c:76:9a} reservation:<nil>}
	I0916 12:02:22.641011  392749 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-684fe62dce2f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:74:73:9a:d9} reservation:<nil>}
	I0916 12:02:22.641944  392749 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-78c9581b9c59 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:f5:47} reservation:<nil>}
	I0916 12:02:22.642797  392749 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f009eba0c78f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:82:cf:c3:8d} reservation:<nil>}
	I0916 12:02:22.643883  392749 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023ed510}
	I0916 12:02:22.643904  392749 network_create.go:124] attempt to create docker network embed-certs-132595 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 12:02:22.643965  392749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-132595 embed-certs-132595
	I0916 12:02:22.717370  392749 network_create.go:108] docker network embed-certs-132595 192.168.103.0/24 created
	I0916 12:02:22.717419  392749 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-132595" container
	I0916 12:02:22.717475  392749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 12:02:22.739425  392749 cli_runner.go:164] Run: docker volume create embed-certs-132595 --label name.minikube.sigs.k8s.io=embed-certs-132595 --label created_by.minikube.sigs.k8s.io=true
	I0916 12:02:22.758826  392749 oci.go:103] Successfully created a docker volume embed-certs-132595
	I0916 12:02:22.758921  392749 cli_runner.go:164] Run: docker run --rm --name embed-certs-132595-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-132595 --entrypoint /usr/bin/test -v embed-certs-132595:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 12:02:23.286517  392749 oci.go:107] Successfully prepared a docker volume embed-certs-132595
	I0916 12:02:23.286582  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:23.286608  392749 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 12:02:23.286686  392749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-132595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 12:02:27.777252  392749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-132595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490517682s)
	I0916 12:02:27.777293  392749 kic.go:203] duration metric: took 4.490683033s to extract preloaded images to volume ...
	W0916 12:02:27.777479  392749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 12:02:27.777606  392749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 12:02:27.828245  392749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-132595 --name embed-certs-132595 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-132595 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-132595 --network embed-certs-132595 --ip 192.168.103.2 --volume embed-certs-132595:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 12:02:28.129271  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Running}}
	I0916 12:02:28.148758  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.168574  392749 cli_runner.go:164] Run: docker exec embed-certs-132595 stat /var/lib/dpkg/alternatives/iptables
	I0916 12:02:28.214356  392749 oci.go:144] the created container "embed-certs-132595" has a running status.
	I0916 12:02:28.214398  392749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa...
	I0916 12:02:28.579373  392749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 12:02:28.600739  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.623045  392749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 12:02:28.623068  392749 kic_runner.go:114] Args: [docker exec --privileged embed-certs-132595 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 12:02:28.687280  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:28.707892  392749 machine.go:93] provisionDockerMachine start ...
	I0916 12:02:28.707978  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:28.730282  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:28.730549  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:28.730566  392749 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 12:02:28.864997  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132595
	
	I0916 12:02:28.865036  392749 ubuntu.go:169] provisioning hostname "embed-certs-132595"
	I0916 12:02:28.865105  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:28.884140  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:28.884312  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:28.884326  392749 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-132595 && echo "embed-certs-132595" | sudo tee /etc/hostname
	I0916 12:02:29.033007  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132595
	
	I0916 12:02:29.033095  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.051460  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:29.051736  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:29.051767  392749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-132595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-132595/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-132595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 12:02:29.185811  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 12:02:29.185838  392749 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 12:02:29.185872  392749 ubuntu.go:177] setting up certificates
	I0916 12:02:29.185882  392749 provision.go:84] configureAuth start
	I0916 12:02:29.185932  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:29.205104  392749 provision.go:143] copyHostCerts
	I0916 12:02:29.205177  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 12:02:29.205191  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 12:02:29.205266  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 12:02:29.205379  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 12:02:29.205393  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 12:02:29.205443  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 12:02:29.205574  392749 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 12:02:29.205591  392749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 12:02:29.205628  392749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 12:02:29.205725  392749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.embed-certs-132595 san=[127.0.0.1 192.168.103.2 embed-certs-132595 localhost minikube]
	I0916 12:02:29.295413  392749 provision.go:177] copyRemoteCerts
	I0916 12:02:29.295493  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 12:02:29.295539  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.314056  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:29.410212  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 12:02:29.433316  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 12:02:29.457490  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 12:02:29.480514  392749 provision.go:87] duration metric: took 294.616578ms to configureAuth
	I0916 12:02:29.480546  392749 ubuntu.go:193] setting minikube options for container-runtime
	I0916 12:02:29.480721  392749 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:29.480840  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.499779  392749 main.go:141] libmachine: Using SSH client type: native
	I0916 12:02:29.499970  392749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0916 12:02:29.499988  392749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 12:02:29.724131  392749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
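	The drop-in written above hands CRI-O an extra --insecure-registry flag covering the service CIDR (10.96.0.0/12), so image pulls from in-cluster registry Services work over plain HTTP; restarting crio makes the unit pick it up. A quick check of the result over the forwarded SSH port, using the key generated earlier:

	    ssh -p 33128 -i /home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa \
	        docker@127.0.0.1 'cat /etc/sysconfig/crio.minikube'
	    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '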
	I0916 12:02:29.724156  392749 machine.go:96] duration metric: took 1.016241182s to provisionDockerMachine
	I0916 12:02:29.724168  392749 client.go:171] duration metric: took 7.142967574s to LocalClient.Create
	I0916 12:02:29.724184  392749 start.go:167] duration metric: took 7.143028884s to libmachine.API.Create "embed-certs-132595"
	I0916 12:02:29.724192  392749 start.go:293] postStartSetup for "embed-certs-132595" (driver="docker")
	I0916 12:02:29.724206  392749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 12:02:29.724308  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 12:02:29.724425  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.742132  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:29.838555  392749 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 12:02:29.841984  392749 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 12:02:29.842030  392749 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 12:02:29.842042  392749 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 12:02:29.842049  392749 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 12:02:29.842061  392749 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 12:02:29.842134  392749 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 12:02:29.842223  392749 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 12:02:29.842335  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 12:02:29.850676  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 12:02:29.874023  392749 start.go:296] duration metric: took 149.81451ms for postStartSetup
	I0916 12:02:29.874395  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:29.891665  392749 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json ...
	I0916 12:02:29.891935  392749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 12:02:29.891976  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:29.910185  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.002481  392749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 12:02:30.007206  392749 start.go:128] duration metric: took 7.429401034s to createHost
	I0916 12:02:30.007234  392749 start.go:83] releasing machines lock for "embed-certs-132595", held for 7.4295318s
	I0916 12:02:30.007311  392749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:02:30.025002  392749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 12:02:30.025037  392749 ssh_runner.go:195] Run: cat /version.json
	I0916 12:02:30.025102  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:30.025103  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:30.043705  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.044185  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:30.210633  392749 ssh_runner.go:195] Run: systemctl --version
	I0916 12:02:30.215247  392749 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 12:02:30.353292  392749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 12:02:30.357777  392749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:02:30.376319  392749 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 12:02:30.376406  392749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:02:30.406228  392749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
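	Renaming the stock bridge/podman configs to *.mk_disabled keeps CRI-O from wiring pods to its default bridge before kindnet (chosen below) installs its own config. The effect is visible in the CNI directory; a sketch, assuming only the files named above were present:

	    ls /etc/cni/net.d
	    # expected:
	    # 100-crio-bridge.conf.mk_disabled
	    # 87-podman-bridge.conflist.mk_disabled
	    # plus the loopback config renamed a step earlier (its name is not shown in the log)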
	I0916 12:02:30.406253  392749 start.go:495] detecting cgroup driver to use...
	I0916 12:02:30.406283  392749 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 12:02:30.406323  392749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 12:02:30.421100  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 12:02:30.432505  392749 docker.go:217] disabling cri-docker service (if available) ...
	I0916 12:02:30.432561  392749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 12:02:30.445665  392749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 12:02:30.459366  392749 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 12:02:30.541779  392749 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 12:02:30.620528  392749 docker.go:233] disabling docker service ...
	I0916 12:02:30.620593  392749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 12:02:30.640092  392749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 12:02:30.651391  392749 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 12:02:30.734601  392749 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 12:02:30.821037  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 12:02:30.832165  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 12:02:30.847898  392749 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 12:02:30.847957  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.858440  392749 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 12:02:30.858500  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.868040  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.877381  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.886632  392749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 12:02:30.895686  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.905708  392749 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:02:30.921634  392749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
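	Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs manager (matching the "cgroupfs" driver detected on the host), move conmon into the pod cgroup, and open unprivileged ports from 0 via default_sysctls. A sketch of verifying the touched keys, assuming the stock kicbase 02-crio.conf layout:

	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # expected (file order may differ):
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # pause_image = "registry.k8s.io/pause:3.10"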
	I0916 12:02:30.931283  392749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 12:02:30.939458  392749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 12:02:30.947335  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:31.023886  392749 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 12:02:31.126953  392749 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 12:02:31.127024  392749 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 12:02:31.130456  392749 start.go:563] Will wait 60s for crictl version
	I0916 12:02:31.130515  392749 ssh_runner.go:195] Run: which crictl
	I0916 12:02:31.134039  392749 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 12:02:31.166783  392749 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 12:02:31.166863  392749 ssh_runner.go:195] Run: crio --version
	I0916 12:02:31.202361  392749 ssh_runner.go:195] Run: crio --version
	I0916 12:02:31.240370  392749 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 12:02:31.241854  392749 cli_runner.go:164] Run: docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 12:02:31.258991  392749 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 12:02:31.262509  392749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 12:02:31.272708  392749 kubeadm.go:883] updating cluster {Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 12:02:31.272831  392749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:02:31.272875  392749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:02:31.336596  392749 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:02:31.336618  392749 crio.go:433] Images already preloaded, skipping extraction
	I0916 12:02:31.336662  392749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:02:31.370353  392749 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:02:31.370403  392749 cache_images.go:84] Images are preloaded, skipping loading
	I0916 12:02:31.370410  392749 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 12:02:31.370494  392749 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-132595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 12:02:31.370555  392749 ssh_runner.go:195] Run: crio config
	I0916 12:02:31.414217  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:31.414235  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:31.414244  392749 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 12:02:31.414263  392749 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-132595 NodeName:embed-certs-132595 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 12:02:31.414385  392749 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-132595"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
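	Before handing this file to kubeadm init below, it can be sanity-checked offline; a usage sketch against the same binaries directory (kubeadm would emit the same v1beta3 deprecation warnings that show up later in the init output):

	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml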
	I0916 12:02:31.414491  392749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 12:02:31.423224  392749 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 12:02:31.423288  392749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 12:02:31.431649  392749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I0916 12:02:31.448899  392749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 12:02:31.465819  392749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0916 12:02:31.484203  392749 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 12:02:31.487892  392749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
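	Both /etc/hosts rewrites (this one and the host.minikube.internal edit earlier) use the same grep-away-then-append idiom, copying a temp file over /etc/hosts because a bare redirect under sudo would not run as root. Afterwards the guest resolves both internal names:

	    grep minikube.internal /etc/hosts
	    # expected:
	    # 192.168.103.1	host.minikube.internal
	    # 192.168.103.2	control-plane.minikube.internal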
	I0916 12:02:31.498931  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:31.578175  392749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:02:31.591266  392749 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595 for IP: 192.168.103.2
	I0916 12:02:31.591291  392749 certs.go:194] generating shared ca certs ...
	I0916 12:02:31.591306  392749 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.591451  392749 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 12:02:31.591500  392749 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 12:02:31.591510  392749 certs.go:256] generating profile certs ...
	I0916 12:02:31.591562  392749 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key
	I0916 12:02:31.591590  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt with IP's: []
	I0916 12:02:31.709220  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt ...
	I0916 12:02:31.709248  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.crt: {Name:mka1d5a1edf02835642de8bdc842db8cd676a26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.709443  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key ...
	I0916 12:02:31.709455  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key: {Name:mk9ae7714dfa095c3ad43e583257aef75ede0041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:31.709547  392749 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d
	I0916 12:02:31.709562  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 12:02:32.044005  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d ...
	I0916 12:02:32.044031  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d: {Name:mk6feeeb4fe0f8ff0e129b6995e86e98cd2ff58b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.044202  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d ...
	I0916 12:02:32.044216  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d: {Name:mk11b35c16267750006ba91ba79ac0aeb369ed92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.044290  392749 certs.go:381] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt.6488143d -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt
	I0916 12:02:32.044387  392749 certs.go:385] copying /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d -> /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key
	I0916 12:02:32.044449  392749 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key
	I0916 12:02:32.044464  392749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt with IP's: []
	I0916 12:02:32.194505  392749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt ...
	I0916 12:02:32.194536  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt: {Name:mk85df2a3dc9e98fc7219fc1ae15551b09a34988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:32.194715  392749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key ...
	I0916 12:02:32.194728  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key: {Name:mk563636bd095728c8aa5b89edf7c40089c8fbee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
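	The SANs requested for the apiserver certificate above cover the kubernetes Service ClusterIP (10.96.0.1, the first address of the 10.96.0.0/12 service CIDR), loopback, 10.0.0.1, and the node IP. They can be confirmed with openssl; a sketch against the copy assembled in the profile directory:

	    openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	    # expected to include:
	    # IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.103.2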
	I0916 12:02:32.194890  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 12:02:32.194928  392749 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 12:02:32.194939  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 12:02:32.194961  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 12:02:32.194983  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 12:02:32.195002  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 12:02:32.195037  392749 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 12:02:32.195649  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 12:02:32.220426  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 12:02:32.243923  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 12:02:32.267291  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 12:02:32.290723  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 12:02:32.316617  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 12:02:32.339515  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 12:02:32.362149  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 12:02:32.385198  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 12:02:32.407529  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 12:02:32.430567  392749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 12:02:32.454462  392749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 12:02:32.471580  392749 ssh_runner.go:195] Run: openssl version
	I0916 12:02:32.476960  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 12:02:32.485882  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.489223  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.489271  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 12:02:32.495629  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 12:02:32.505521  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 12:02:32.514505  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.517905  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.517965  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:02:32.524346  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 12:02:32.533311  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 12:02:32.542142  392749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.545523  392749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.545594  392749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 12:02:32.552365  392749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
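	Each of the three install steps above follows OpenSSL's c_rehash convention: a certificate is trusted system-wide once /etc/ssl/certs holds a <subject-hash>.0 symlink to its PEM, with the hash taken from openssl x509 -hash (hence 3ec20f2e, b5213941, and 51391683 above). The same dance for one cert:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$h"   # b5213941 for this CA
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"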
	I0916 12:02:32.561599  392749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 12:02:32.565017  392749 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 12:02:32.565075  392749 kubeadm.go:392] StartCluster: {Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:02:32.565168  392749 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 12:02:32.565223  392749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 12:02:32.600725  392749 cri.go:89] found id: ""
	I0916 12:02:32.600782  392749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 12:02:32.610233  392749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 12:02:32.618479  392749 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 12:02:32.618526  392749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 12:02:32.626556  392749 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 12:02:32.626577  392749 kubeadm.go:157] found existing configuration files:
	
	I0916 12:02:32.626634  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 12:02:32.634946  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 12:02:32.635015  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 12:02:32.644145  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 12:02:32.652426  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 12:02:32.652498  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 12:02:32.660703  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 12:02:32.669431  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 12:02:32.669498  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 12:02:32.678101  392749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 12:02:32.686434  392749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 12:02:32.686497  392749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 12:02:32.694571  392749 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 12:02:32.733058  392749 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 12:02:32.733397  392749 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 12:02:32.749836  392749 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 12:02:32.749917  392749 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 12:02:32.749956  392749 kubeadm.go:310] OS: Linux
	I0916 12:02:32.750007  392749 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 12:02:32.750062  392749 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 12:02:32.750114  392749 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 12:02:32.750170  392749 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 12:02:32.750227  392749 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 12:02:32.750313  392749 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 12:02:32.750390  392749 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 12:02:32.750464  392749 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 12:02:32.750559  392749 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 12:02:32.804063  392749 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 12:02:32.804209  392749 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 12:02:32.804363  392749 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
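	As the message above suggests, the control-plane images can be pulled ahead of init; a usage sketch with this cluster's config and binaries (a near no-op here, since the preloaded tarball already populated the CRI-O image store):

	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config images pull \
	        --config /var/tmp/minikube/kubeadm.yaml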
	I0916 12:02:32.810567  392749 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 12:02:32.813202  392749 out.go:235]   - Generating certificates and keys ...
	I0916 12:02:32.813305  392749 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 12:02:32.813417  392749 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 12:02:33.063616  392749 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 12:02:33.364335  392749 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 12:02:33.538855  392749 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 12:02:33.629054  392749 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 12:02:33.726242  392749 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 12:02:33.726358  392749 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-132595 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 12:02:33.819559  392749 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 12:02:33.819747  392749 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-132595 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 12:02:34.040985  392749 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 12:02:34.313148  392749 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 12:02:34.371964  392749 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 12:02:34.372034  392749 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 12:02:34.533586  392749 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 12:02:34.613255  392749 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 12:02:34.821003  392749 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 12:02:35.043370  392749 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 12:02:35.119304  392749 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 12:02:35.119834  392749 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 12:02:35.122405  392749 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 12:02:35.124431  392749 out.go:235]   - Booting up control plane ...
	I0916 12:02:35.124649  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 12:02:35.124761  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 12:02:35.125024  392749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 12:02:35.134352  392749 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 12:02:35.139484  392749 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 12:02:35.139586  392749 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 12:02:35.219432  392749 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 12:02:35.219608  392749 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 12:02:35.721291  392749 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.804738ms
	I0916 12:02:35.721439  392749 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 12:02:40.722572  392749 kubeadm.go:310] [api-check] The API server is healthy after 5.001398972s
	I0916 12:02:40.734247  392749 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 12:02:40.746322  392749 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 12:02:40.764937  392749 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 12:02:40.765155  392749 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-132595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 12:02:40.772968  392749 kubeadm.go:310] [bootstrap-token] Using token: 7gckm0.d0i7kpdezz05toci
	I0916 12:02:40.774166  392749 out.go:235]   - Configuring RBAC rules ...
	I0916 12:02:40.774305  392749 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 12:02:40.777587  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 12:02:40.783580  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 12:02:40.786051  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 12:02:40.788495  392749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 12:02:40.790852  392749 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 12:02:41.130233  392749 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 12:02:41.550016  392749 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 12:02:42.129585  392749 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 12:02:42.130524  392749 kubeadm.go:310] 
	I0916 12:02:42.130589  392749 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 12:02:42.130596  392749 kubeadm.go:310] 
	I0916 12:02:42.130670  392749 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 12:02:42.130680  392749 kubeadm.go:310] 
	I0916 12:02:42.130716  392749 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 12:02:42.130790  392749 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 12:02:42.130890  392749 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 12:02:42.130913  392749 kubeadm.go:310] 
	I0916 12:02:42.131015  392749 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 12:02:42.131033  392749 kubeadm.go:310] 
	I0916 12:02:42.131106  392749 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 12:02:42.131126  392749 kubeadm.go:310] 
	I0916 12:02:42.131207  392749 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 12:02:42.131315  392749 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 12:02:42.131419  392749 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 12:02:42.131430  392749 kubeadm.go:310] 
	I0916 12:02:42.131547  392749 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 12:02:42.131658  392749 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 12:02:42.131670  392749 kubeadm.go:310] 
	I0916 12:02:42.131792  392749 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7gckm0.d0i7kpdezz05toci \
	I0916 12:02:42.131887  392749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 \
	I0916 12:02:42.131918  392749 kubeadm.go:310] 	--control-plane 
	I0916 12:02:42.131928  392749 kubeadm.go:310] 
	I0916 12:02:42.132048  392749 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 12:02:42.132058  392749 kubeadm.go:310] 
	I0916 12:02:42.132192  392749 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7gckm0.d0i7kpdezz05toci \
	I0916 12:02:42.132353  392749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316 
	I0916 12:02:42.135019  392749 kubeadm.go:310] W0916 12:02:32.730384    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 12:02:42.135280  392749 kubeadm.go:310] W0916 12:02:32.731017    1320 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 12:02:42.135512  392749 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 12:02:42.135628  392749 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
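	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. With certificatesDir set to /var/lib/minikube/certs (rather than the stock /etc/kubernetes/pki), it can be recomputed on the control plane using the pipeline from the kubeadm docs:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	    # should print f35b678990fde8e297cf00a187b826891e0ec2054ac3d72fabe825459c995316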
	I0916 12:02:42.135653  392749 cni.go:84] Creating CNI manager for ""
	I0916 12:02:42.135665  392749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:02:42.138549  392749 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 12:02:42.139971  392749 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 12:02:42.143956  392749 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 12:02:42.143980  392749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 12:02:42.161613  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 12:02:42.354438  392749 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 12:02:42.354510  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:42.354533  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-132595 minikube.k8s.io/updated_at=2024_09_16T12_02_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=embed-certs-132595 minikube.k8s.io/primary=true
	I0916 12:02:42.504120  392749 ops.go:34] apiserver oom_adj: -16
	I0916 12:02:42.504136  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:43.004526  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:43.504277  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:44.004404  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:44.505245  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:45.004800  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:45.505119  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.004265  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.505238  392749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 12:02:46.574458  392749 kubeadm.go:1113] duration metric: took 4.220005009s to wait for elevateKubeSystemPrivileges
	I0916 12:02:46.574491  392749 kubeadm.go:394] duration metric: took 14.009421351s to StartCluster
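
The elevateKubeSystemPrivileges step above is a poll loop: minikube re-runs "kubectl get sa default" roughly every 500ms until the default ServiceAccount exists. A shell sketch of the same wait, using the binary and kubeconfig paths from this run:

	# Poll until the default ServiceAccount has been created (sketch of the loop logged above)
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
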
	I0916 12:02:46.574511  392749 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:46.574575  392749 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:02:46.576211  392749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:02:46.576447  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 12:02:46.576447  392749 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:02:46.576532  392749 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 12:02:46.576624  392749 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132595"
	I0916 12:02:46.576643  392749 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132595"
	I0916 12:02:46.576668  392749 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:02:46.576688  392749 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132595"
	I0916 12:02:46.576656  392749 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-132595"
	I0916 12:02:46.576774  392749 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:02:46.577752  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.577919  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.579915  392749 out.go:177] * Verifying Kubernetes components...
	I0916 12:02:46.581417  392749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:02:46.602279  392749 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 12:02:46.603012  392749 addons.go:234] Setting addon default-storageclass=true in "embed-certs-132595"
	I0916 12:02:46.603054  392749 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:02:46.603533  392749 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:02:46.603631  392749 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:02:46.603648  392749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 12:02:46.603704  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:46.623554  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:46.627265  392749 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 12:02:46.627294  392749 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 12:02:46.627365  392749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:02:46.652386  392749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:02:46.708797  392749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
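
The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a "log" directive before "errors" and a "hosts" block before the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves inside the cluster. Reconstructed from those sed expressions, the patched Corefile fragment should read:

	        log
	        errors
	        ...
	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
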
	I0916 12:02:46.808404  392749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:02:46.819270  392749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:02:47.003989  392749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 12:02:47.322159  392749 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 12:02:47.325904  392749 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132595" to be "Ready" ...
	I0916 12:02:47.699966  392749 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 12:02:47.701431  392749 addons.go:510] duration metric: took 1.124891375s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 12:02:47.827676  392749 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-132595" context rescaled to 1 replicas
	I0916 12:02:49.329713  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:51.829363  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:53.829520  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:55.843996  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:02:58.329318  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:00.329424  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:02.329815  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:04.829037  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:06.829132  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:08.829193  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:11.329223  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:13.829206  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:16.329632  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:18.829227  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:21.329475  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:23.829047  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:25.829344  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:27.830100  392749 node_ready.go:53] node "embed-certs-132595" has status "Ready":"False"
	I0916 12:03:28.329181  392749 node_ready.go:49] node "embed-certs-132595" has status "Ready":"True"
	I0916 12:03:28.329205  392749 node_ready.go:38] duration metric: took 41.003271788s for node "embed-certs-132595" to be "Ready" ...
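
The 41s wait above is the node Ready poll (node_ready.go checks roughly every 2.5s). A kubectl equivalent of the same wait, as a sketch assuming the kubeconfig from this run is active:

	kubectl wait --for=condition=Ready node/embed-certs-132595 --timeout=6m
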
	I0916 12:03:28.329215  392749 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:03:28.335458  392749 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.841863  392749 pod_ready.go:93] pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.841891  392749 pod_ready.go:82] duration metric: took 1.506402305s for pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.841902  392749 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.846650  392749 pod_ready.go:93] pod "etcd-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.846672  392749 pod_ready.go:82] duration metric: took 4.765058ms for pod "etcd-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.846685  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.851324  392749 pod_ready.go:93] pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.851348  392749 pod_ready.go:82] duration metric: took 4.655631ms for pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.851361  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.855398  392749 pod_ready.go:93] pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.855418  392749 pod_ready.go:82] duration metric: took 4.049899ms for pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.855427  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5jjq9" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.929577  392749 pod_ready.go:93] pod "kube-proxy-5jjq9" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:29.929598  392749 pod_ready.go:82] duration metric: took 74.164746ms for pod "kube-proxy-5jjq9" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:29.929610  392749 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:30.329892  392749 pod_ready.go:93] pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:30.329915  392749 pod_ready.go:82] duration metric: took 400.298548ms for pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:30.329926  392749 pod_ready.go:39] duration metric: took 2.000698892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:03:30.329943  392749 api_server.go:52] waiting for apiserver process to appear ...
	I0916 12:03:30.329991  392749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 12:03:30.341465  392749 api_server.go:72] duration metric: took 43.764981639s to wait for apiserver process to appear ...
	I0916 12:03:30.341495  392749 api_server.go:88] waiting for apiserver healthz status ...
	I0916 12:03:30.341517  392749 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 12:03:30.346926  392749 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 12:03:30.347857  392749 api_server.go:141] control plane version: v1.31.1
	I0916 12:03:30.347882  392749 api_server.go:131] duration metric: took 6.380265ms to wait for apiserver health ...
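
The healthz probe above is a plain HTTPS GET against the apiserver endpoint. A manual equivalent (sketch; -k skips verification of the cluster's embedded certs, which is acceptable only for a local probe like this):

	curl -k https://192.168.103.2:8443/healthz
	# -> ok
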
	I0916 12:03:30.347891  392749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 12:03:30.533749  392749 system_pods.go:59] 8 kube-system pods found
	I0916 12:03:30.533781  392749 system_pods.go:61] "coredns-7c65d6cfc9-lmhpj" [dec7e28f-bb5b-4238-abf8-a17607466015] Running
	I0916 12:03:30.533787  392749 system_pods.go:61] "etcd-embed-certs-132595" [a0b7465f-7b8a-4c03-9c7b-9aba551d7d98] Running
	I0916 12:03:30.533791  392749 system_pods.go:61] "kindnet-s4vkq" [8a7383ab-18b0-4118-9810-ff1cbbdd9ecf] Running
	I0916 12:03:30.533795  392749 system_pods.go:61] "kube-apiserver-embed-certs-132595" [8df2452b-d2dc-44af-86cb-75d1fb8a71d5] Running
	I0916 12:03:30.533798  392749 system_pods.go:61] "kube-controller-manager-embed-certs-132595" [673d272a-803b-45a5-81e7-ba32ff89ec4f] Running
	I0916 12:03:30.533801  392749 system_pods.go:61] "kube-proxy-5jjq9" [da63c6b0-19b1-4ab0-abc4-ac2b785e8e88] Running
	I0916 12:03:30.533805  392749 system_pods.go:61] "kube-scheduler-embed-certs-132595" [b8f3262f-ab89-4efd-8ec2-bcea70ce3c3f] Running
	I0916 12:03:30.533808  392749 system_pods.go:61] "storage-provisioner" [b94fecd1-4b72-474b-9296-fb5c86912f64] Running
	I0916 12:03:30.533814  392749 system_pods.go:74] duration metric: took 185.917389ms to wait for pod list to return data ...
	I0916 12:03:30.533821  392749 default_sa.go:34] waiting for default service account to be created ...
	I0916 12:03:30.730240  392749 default_sa.go:45] found service account: "default"
	I0916 12:03:30.730269  392749 default_sa.go:55] duration metric: took 196.441382ms for default service account to be created ...
	I0916 12:03:30.730278  392749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 12:03:30.932106  392749 system_pods.go:86] 8 kube-system pods found
	I0916 12:03:30.932141  392749 system_pods.go:89] "coredns-7c65d6cfc9-lmhpj" [dec7e28f-bb5b-4238-abf8-a17607466015] Running
	I0916 12:03:30.932149  392749 system_pods.go:89] "etcd-embed-certs-132595" [a0b7465f-7b8a-4c03-9c7b-9aba551d7d98] Running
	I0916 12:03:30.932155  392749 system_pods.go:89] "kindnet-s4vkq" [8a7383ab-18b0-4118-9810-ff1cbbdd9ecf] Running
	I0916 12:03:30.932160  392749 system_pods.go:89] "kube-apiserver-embed-certs-132595" [8df2452b-d2dc-44af-86cb-75d1fb8a71d5] Running
	I0916 12:03:30.932165  392749 system_pods.go:89] "kube-controller-manager-embed-certs-132595" [673d272a-803b-45a5-81e7-ba32ff89ec4f] Running
	I0916 12:03:30.932170  392749 system_pods.go:89] "kube-proxy-5jjq9" [da63c6b0-19b1-4ab0-abc4-ac2b785e8e88] Running
	I0916 12:03:30.932175  392749 system_pods.go:89] "kube-scheduler-embed-certs-132595" [b8f3262f-ab89-4efd-8ec2-bcea70ce3c3f] Running
	I0916 12:03:30.932180  392749 system_pods.go:89] "storage-provisioner" [b94fecd1-4b72-474b-9296-fb5c86912f64] Running
	I0916 12:03:30.932189  392749 system_pods.go:126] duration metric: took 201.903374ms to wait for k8s-apps to be running ...
	I0916 12:03:30.932199  392749 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 12:03:30.932250  392749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 12:03:30.943724  392749 system_svc.go:56] duration metric: took 11.513209ms WaitForService to wait for kubelet
	I0916 12:03:30.943753  392749 kubeadm.go:582] duration metric: took 44.367276865s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:03:30.943776  392749 node_conditions.go:102] verifying NodePressure condition ...
	I0916 12:03:31.130459  392749 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 12:03:31.130490  392749 node_conditions.go:123] node cpu capacity is 8
	I0916 12:03:31.130506  392749 node_conditions.go:105] duration metric: took 186.72463ms to run NodePressure ...
	I0916 12:03:31.130519  392749 start.go:241] waiting for startup goroutines ...
	I0916 12:03:31.130528  392749 start.go:246] waiting for cluster config update ...
	I0916 12:03:31.130542  392749 start.go:255] writing updated cluster config ...
	I0916 12:03:31.130846  392749 ssh_runner.go:195] Run: rm -f paused
	I0916 12:03:31.136980  392749 out.go:177] * Done! kubectl is now configured to use "embed-certs-132595" cluster and "default" namespace by default
	E0916 12:03:31.138336  392749 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
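
The trailing "exec format error" from /usr/local/bin/kubectl means the kernel refused to execute the binary at all, which typically indicates a wrong-architecture or corrupted download rather than any cluster-side problem. A hypothetical triage, not part of this run:

	file /usr/local/bin/kubectl        # expect: ELF 64-bit LSB executable, x86-64
	uname -m                           # expect: x86_64 on this runner
	head -c 4 /usr/local/bin/kubectl   # anything but the ELF magic (\x7fELF) is a bad binary
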
	
	
	==> CRI-O <==
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575178636Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.11.3" id=c366e22a-5e1f-491e-9035-6fd374bfe7b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575303969Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=59687f8b-e7c9-4348-8d28-901638ae293b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575368833Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c366e22a-5e1f-491e-9035-6fd374bfe7b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.575400341Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.576002923Z" level=info msg="Creating container: kube-system/coredns-7c65d6cfc9-lmhpj/coredns" id=8d08e494-ec0b-4b6d-be32-c6718a9b0d95 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.576092718Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.586948820Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e375941d0176fe56097181bec36e35d52a5a7d8cd1d147901099904551bc4537/merged/etc/passwd: no such file or directory"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.586986448Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e375941d0176fe56097181bec36e35d52a5a7d8cd1d147901099904551bc4537/merged/etc/group: no such file or directory"
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.628157312Z" level=info msg="Created container 04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02: kube-system/storage-provisioner/storage-provisioner" id=59687f8b-e7c9-4348-8d28-901638ae293b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.628774234Z" level=info msg="Starting container: 04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02" id=c6eae36f-652b-4bb4-ba8e-a3d4aef3a5a8 name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.636335332Z" level=info msg="Started container" PID=2235 containerID=04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02 description=kube-system/storage-provisioner/storage-provisioner id=c6eae36f-652b-4bb4-ba8e-a3d4aef3a5a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e29106fe85d76fa4de619461e9d9494576a9dccd6c91884e0f1da2ca7d20785d
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.638679752Z" level=info msg="Created container 6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454: kube-system/coredns-7c65d6cfc9-lmhpj/coredns" id=8d08e494-ec0b-4b6d-be32-c6718a9b0d95 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.639344270Z" level=info msg="Starting container: 6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454" id=08658758-7578-4590-a126-dd9c8a60b47c name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 12:03:28 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:28.646205266Z" level=info msg="Started container" PID=2253 containerID=6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454 description=kube-system/coredns-7c65d6cfc9-lmhpj/coredns id=08658758-7578-4590-a126-dd9c8a60b47c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0699e4a1527a5e22ec2c5a3eae9411ebd5f65603f653f4ead716e20f5f2ea774
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.039703019Z" level=info msg="Running pod sandbox: kube-system/metrics-server-6867b74b74-rhxfx/POD" id=3724d12d-2101-4ea2-9639-26cdc4cb728d name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.039788518Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.055492477Z" level=info msg="Got pod network &{Name:metrics-server-6867b74b74-rhxfx Namespace:kube-system ID:36e43fda22f270d21797cd853e67cbbe7f8c57be0b84fd0ad9578fb012d93ca5 UID:1f7ed956-692d-4b25-9cbf-8f79cf304d25 NetNS:/var/run/netns/eabadb78-33e7-4754-b410-6ba3bbff4fa9 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.055539108Z" level=info msg="Adding pod kube-system_metrics-server-6867b74b74-rhxfx to CNI network \"kindnet\" (type=ptp)"
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.064737964Z" level=info msg="Got pod network &{Name:metrics-server-6867b74b74-rhxfx Namespace:kube-system ID:36e43fda22f270d21797cd853e67cbbe7f8c57be0b84fd0ad9578fb012d93ca5 UID:1f7ed956-692d-4b25-9cbf-8f79cf304d25 NetNS:/var/run/netns/eabadb78-33e7-4754-b410-6ba3bbff4fa9 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.064861061Z" level=info msg="Checking pod kube-system_metrics-server-6867b74b74-rhxfx for CNI network kindnet (type=ptp)"
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.067670298Z" level=info msg="Ran pod sandbox 36e43fda22f270d21797cd853e67cbbe7f8c57be0b84fd0ad9578fb012d93ca5 with infra container: kube-system/metrics-server-6867b74b74-rhxfx/POD" id=3724d12d-2101-4ea2-9639-26cdc4cb728d name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.069054352Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9903c375-9aea-4f88-895f-562b67f1357a name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.069290577Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9903c375-9aea-4f88-895f-562b67f1357a name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.070054720Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=18a854f5-fe5f-4867-8293-b3a974788c47 name=/runtime.v1.ImageService/PullImage
	Sep 16 12:03:36 embed-certs-132595 crio[1034]: time="2024-09-16 12:03:36.101851045Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6198a816b1cdf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 seconds ago        Running             coredns                   0                   0699e4a1527a5       coredns-7c65d6cfc9-lmhpj
	04bb82a52f980       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 seconds ago        Running             storage-provisioner       0                   e29106fe85d76       storage-provisioner
	7e5dd3f1d7192       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   49 seconds ago       Running             kindnet-cni               0                   71c9e456d6bf1       kindnet-s4vkq
	044a317804ef8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   49 seconds ago       Running             kube-proxy                0                   d7e6cbd74393e       kube-proxy-5jjq9
	aa00ff4074279       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      0                   079241af2ce23       etcd-embed-certs-132595
	43acde1f85a74       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Running             kube-controller-manager   0                   9e06df2fe014b       kube-controller-manager-embed-certs-132595
	2e260ff8685de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Running             kube-scheduler            0                   f4b76295d450a       kube-scheduler-embed-certs-132595
	09176dad2cb1c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Running             kube-apiserver            0                   06c0e46f86ec9       kube-apiserver-embed-certs-132595
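
The table above is CRI-level container state as reported by the runtime; on the node itself it corresponds to (sketch):

	sudo crictl ps -a
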
	
	
	==> coredns [6198a816b1cdf5dfcb5d1b9fdf79c2664e44ee6a169a466103a679b2acc82454] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43571 - 19451 "HINFO IN 8873162753112370163.8194975584790532838. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011764927s
	
	
	==> describe nodes <==
	Name:               embed-certs-132595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-132595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=embed-certs-132595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T12_02_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 12:02:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-132595
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 12:03:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 12:03:28 +0000   Mon, 16 Sep 2024 12:03:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-132595
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdbbf6049dff4c2fbfb05ee6d4e44c79
	  System UUID:                ac9bc1b7-26e7-4faa-ad97-c61b5564343d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-lmhpj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     50s
	  kube-system                 etcd-embed-certs-132595                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         55s
	  kube-system                 kindnet-s4vkq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      50s
	  kube-system                 kube-apiserver-embed-certs-132595             250m (3%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-embed-certs-132595    200m (2%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-proxy-5jjq9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-embed-certs-132595             100m (1%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 metrics-server-6867b74b74-rhxfx               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 48s                kube-proxy       
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x7 over 61s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 55s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  55s                kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s                kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s                kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                node-controller  Node embed-certs-132595 event: Registered Node embed-certs-132595 in Controller
	  Normal   NodeReady                8s                 kubelet          Node embed-certs-132595 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.954619] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000006] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.059994] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000007] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +6.207537] net_ratelimit: 5 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +8.191403] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000002] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.003944] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000004] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-22c51b08b0ca
	[  +0.000002] ll header: 00000000: 02 42 6f 4b 58 7e 02 42 c0 a8 67 02 08 00
	
	
	==> etcd [aa00ff407427948b5e089e635cd56649f686d7a6c9e475586db68aae2101c56f] <==
	{"level":"info","ts":"2024-09-16T12:02:36.601518Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T12:02:36.601616Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T12:02:36.601663Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T12:02:36.601767Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T12:02:36.601799Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T12:02:37.030837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2024-09-16T12:02:37.030963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.030992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T12:02:37.032010Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.032867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T12:02:37.032863Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-132595 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T12:02:37.032902Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T12:02:37.033174Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T12:02:37.033206Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T12:02:37.033382Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.033486Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.033512Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:02:37.034156Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T12:02:37.034157Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T12:02:37.035300Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T12:02:37.035303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:03:36 up  1:45,  0 users,  load average: 1.83, 1.22, 1.01
	Linux embed-certs-132595 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7e5dd3f1d71925f826db082ad675d3101e7aae3acce70fd7b76a514c9a89f6fd] <==
	W0916 12:03:18.015731       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015833       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015860       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0916 12:03:18.015837       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:18.015898       1 trace.go:236] Trace[31706214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[31706214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[31706214]: [30.001681318s] [30.001681318s] END
	I0916 12:03:18.015898       1 trace.go:236] Trace[592354911]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[592354911]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[592354911]: [30.001620265s] [30.001620265s] END
	E0916 12:03:18.015924       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0916 12:03:18.015923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:18.015935       1 trace.go:236] Trace[1491040377]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[1491040377]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[1491040377]: [30.001712492s] [30.001712492s] END
	I0916 12:03:18.015935       1 trace.go:236] Trace[2145919099]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (16-Sep-2024 12:02:48.014) (total time: 30001ms):
	Trace[2145919099]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:03:18.015)
	Trace[2145919099]: [30.001694697s] [30.001694697s] END
	E0916 12:03:18.015954       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0916 12:03:18.015960       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0916 12:03:19.315171       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 12:03:19.315197       1 metrics.go:61] Registering metrics
	I0916 12:03:19.315264       1 controller.go:374] Syncing nftables rules
	I0916 12:03:28.021447       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:03:28.021489       1 main.go:299] handling current node
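
kindnet's initial list calls against 10.96.0.1:443 (the clusterIP of the kubernetes Service) time out for the first 30s after startup and then succeed at 12:03:19, after which its caches sync. Had the timeouts persisted, a first check (sketch) would be the Service and its backing endpoint:

	kubectl get svc kubernetes
	kubectl get endpoints kubernetes
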
	
	
	==> kube-apiserver [09176dad2cb1c2af9cd3430fb4f7fca0bd2ff37e3126706cf8504f9f1f4f54cc] <==
	E0916 12:03:35.404100       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 12:03:35.405455       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 12:03:35.529592       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.106.121.198"}
	W0916 12:03:35.537245       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 12:03:35.537432       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 12:03:35.540855       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 12:03:35.540905       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 12:03:36.399439       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 12:03:36.399467       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 12:03:36.399484       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 12:03:36.399556       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 12:03:36.400600       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 12:03:36.400623       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
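
The repeated 503s above are the aggregation layer failing to reach metrics-server, whose image is pinned to fake.domain/registry.k8s.io/echoserver:1.4 in this run (see the CRI-O log above) and so can never be pulled. The APIService object would report the same unavailable condition (sketch):

	kubectl get apiservice v1beta1.metrics.k8s.io
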
	
	
	==> kube-controller-manager [43acde1f85a74d4bd7d60bb7ed1dbd6e59079441a502cee1be854f6abe5e35b6] <==
	I0916 12:02:46.005761       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 12:02:46.362324       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 12:02:46.362359       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 12:02:46.373448       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 12:02:46.510926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:02:47.005464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="311.027327ms"
	I0916 12:02:47.016485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.96288ms"
	I0916 12:02:47.016598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.136µs"
	I0916 12:02:47.016676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="46.065µs"
	I0916 12:02:47.095940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.503µs"
	I0916 12:02:47.497738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.189918ms"
	I0916 12:02:47.507073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.188248ms"
	I0916 12:02:47.507293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.054µs"
	I0916 12:03:28.228034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:03:28.237164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-132595"
	I0916 12:03:28.243973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="113.42µs"
	I0916 12:03:28.254166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.618µs"
	I0916 12:03:29.601516       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.101551ms"
	I0916 12:03:29.601730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.683µs"
	I0916 12:03:31.009954       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 12:03:35.438584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="20.117889ms"
	I0916 12:03:35.445933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="7.207966ms"
	I0916 12:03:35.493895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="87.244µs"
	I0916 12:03:35.495041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="120.903µs"
	I0916 12:03:36.601918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="68.081µs"
	
	
	==> kube-proxy [044a317804ef8bd211cafdc21ae7bf14d25d5e48ffbf28d2a623796fc0f3bec3] <==
	I0916 12:02:47.629252       1 server_linux.go:66] "Using iptables proxy"
	I0916 12:02:47.753705       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 12:02:47.753773       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 12:02:47.773672       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 12:02:47.773734       1 server_linux.go:169] "Using iptables Proxier"
	I0916 12:02:47.775582       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 12:02:47.775952       1 server.go:483] "Version info" version="v1.31.1"
	I0916 12:02:47.775990       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 12:02:47.778496       1 config.go:105] "Starting endpoint slice config controller"
	I0916 12:02:47.778576       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 12:02:47.778508       1 config.go:328] "Starting node config controller"
	I0916 12:02:47.778658       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 12:02:47.778538       1 config.go:199] "Starting service config controller"
	I0916 12:02:47.778717       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 12:02:47.879076       1 shared_informer.go:320] Caches are synced for service config
	I0916 12:02:47.879101       1 shared_informer.go:320] Caches are synced for node config
	I0916 12:02:47.879085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2e260ff8685de88af344ef117d8cbfa1ff17b511040b27ce76779a023b1eaa4d] <==
	W0916 12:02:39.021480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 12:02:39.021498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.021539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 12:02:39.021562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.021575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 12:02:39.021601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.859282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 12:02:39.859324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.895908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 12:02:39.895947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.912462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 12:02:39.912503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.947122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 12:02:39.947168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.965072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 12:02:39.965130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.967121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 12:02:39.967256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:39.977801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 12:02:39.977847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 12:02:40.200165       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 12:02:40.200204       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 12:02:40.236012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 12:02:40.236065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 12:02:42.917675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 12:02:48 embed-certs-132595 kubelet[1652]: I0916 12:02:48.517119    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jjq9" podStartSLOduration=2.5170927770000002 podStartE2EDuration="2.517092777s" podCreationTimestamp="2024-09-16 12:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:02:48.517001015 +0000 UTC m=+7.192669259" watchObservedRunningTime="2024-09-16 12:02:48.517092777 +0000 UTC m=+7.192761020"
	Sep 16 12:02:51 embed-certs-132595 kubelet[1652]: E0916 12:02:51.434502    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488171434318936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:02:51 embed-certs-132595 kubelet[1652]: E0916 12:02:51.434548    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488171434318936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:01 embed-certs-132595 kubelet[1652]: E0916 12:03:01.435826    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488181435634385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:01 embed-certs-132595 kubelet[1652]: E0916 12:03:01.435872    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488181435634385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:11 embed-certs-132595 kubelet[1652]: E0916 12:03:11.437372    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488191437189942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:11 embed-certs-132595 kubelet[1652]: E0916 12:03:11.437413    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488191437189942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:21 embed-certs-132595 kubelet[1652]: E0916 12:03:21.438567    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488201438406245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:21 embed-certs-132595 kubelet[1652]: E0916 12:03:21.438612    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488201438406245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.218921    1652 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387462    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dec7e28f-bb5b-4238-abf8-a17607466015-config-volume\") pod \"coredns-7c65d6cfc9-lmhpj\" (UID: \"dec7e28f-bb5b-4238-abf8-a17607466015\") " pod="kube-system/coredns-7c65d6cfc9-lmhpj"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387509    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2qs\" (UniqueName: \"kubernetes.io/projected/dec7e28f-bb5b-4238-abf8-a17607466015-kube-api-access-qm2qs\") pod \"coredns-7c65d6cfc9-lmhpj\" (UID: \"dec7e28f-bb5b-4238-abf8-a17607466015\") " pod="kube-system/coredns-7c65d6cfc9-lmhpj"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387532    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b94fecd1-4b72-474b-9296-fb5c86912f64-tmp\") pod \"storage-provisioner\" (UID: \"b94fecd1-4b72-474b-9296-fb5c86912f64\") " pod="kube-system/storage-provisioner"
	Sep 16 12:03:28 embed-certs-132595 kubelet[1652]: I0916 12:03:28.387546    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w5lv\" (UniqueName: \"kubernetes.io/projected/b94fecd1-4b72-474b-9296-fb5c86912f64-kube-api-access-2w5lv\") pod \"storage-provisioner\" (UID: \"b94fecd1-4b72-474b-9296-fb5c86912f64\") " pod="kube-system/storage-provisioner"
	Sep 16 12:03:29 embed-certs-132595 kubelet[1652]: I0916 12:03:29.584364    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.584345227 podStartE2EDuration="42.584345227s" podCreationTimestamp="2024-09-16 12:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:03:29.58423399 +0000 UTC m=+48.259902234" watchObservedRunningTime="2024-09-16 12:03:29.584345227 +0000 UTC m=+48.260013470"
	Sep 16 12:03:31 embed-certs-132595 kubelet[1652]: E0916 12:03:31.439757    1652 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488211439535902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:31 embed-certs-132595 kubelet[1652]: E0916 12:03:31.439799    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488211439535902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:03:35 embed-certs-132595 kubelet[1652]: I0916 12:03:35.437253    1652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lmhpj" podStartSLOduration=49.437223923 podStartE2EDuration="49.437223923s" podCreationTimestamp="2024-09-16 12:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 12:03:29.594123806 +0000 UTC m=+48.269792050" watchObservedRunningTime="2024-09-16 12:03:35.437223923 +0000 UTC m=+54.112892167"
	Sep 16 12:03:35 embed-certs-132595 kubelet[1652]: I0916 12:03:35.636541    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlwj\" (UniqueName: \"kubernetes.io/projected/1f7ed956-692d-4b25-9cbf-8f79cf304d25-kube-api-access-xwlwj\") pod \"metrics-server-6867b74b74-rhxfx\" (UID: \"1f7ed956-692d-4b25-9cbf-8f79cf304d25\") " pod="kube-system/metrics-server-6867b74b74-rhxfx"
	Sep 16 12:03:35 embed-certs-132595 kubelet[1652]: I0916 12:03:35.636599    1652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1f7ed956-692d-4b25-9cbf-8f79cf304d25-tmp-dir\") pod \"metrics-server-6867b74b74-rhxfx\" (UID: \"1f7ed956-692d-4b25-9cbf-8f79cf304d25\") " pod="kube-system/metrics-server-6867b74b74-rhxfx"
	Sep 16 12:03:36 embed-certs-132595 kubelet[1652]: E0916 12:03:36.140737    1652 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 12:03:36 embed-certs-132595 kubelet[1652]: E0916 12:03:36.140818    1652 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 12:03:36 embed-certs-132595 kubelet[1652]: E0916 12:03:36.141077    1652 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwlwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-rhxfx_kube-system(1f7ed956-692d-4b25-9cbf-8f79cf304d25): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 16 12:03:36 embed-certs-132595 kubelet[1652]: E0916 12:03:36.142302    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-rhxfx" podUID="1f7ed956-692d-4b25-9cbf-8f79cf304d25"
	Sep 16 12:03:36 embed-certs-132595 kubelet[1652]: E0916 12:03:36.591936    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rhxfx" podUID="1f7ed956-692d-4b25-9cbf-8f79cf304d25"
	
	
	==> storage-provisioner [04bb82a52f9807acdde1b3dd976eefa634d98e0c9d6f2aa005035f46cf3cab02] <==
	I0916 12:03:28.648757       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 12:03:28.659095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 12:03:28.659135       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 12:03:28.701058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 12:03:28.701294       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794!
	I0916 12:03:28.701548       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"145c877d-a7a1-47fc-887a-f3ff6cf439ce", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794 became leader
	I0916 12:03:28.801622       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-132595_4fa5d305-3e55-4b97-bd8f-b34b08439794!
	

-- /stdout --
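Two details in the logs above are worth noting. First, kube-proxy warns that nodePortAddresses is unset and itself suggests `--nodeport-addresses primary`. A minimal sketch of applying that suggestion, assuming minikube's kubeadm-style layout in which kube-proxy reads its KubeProxyConfiguration from the kube-system/kube-proxy ConfigMap (and assuming a working kubectl, which this run did not have):

    # Edit the kube-proxy ConfigMap and, under the config.conf key, set:
    #   nodePortAddresses: ["primary"]
    kubectl --context embed-certs-132595 -n kube-system edit configmap kube-proxy
    # Restart the kube-proxy pods so they pick up the new config:
    kubectl --context embed-certs-132595 -n kube-system delete pod -l k8s-app=kube-proxy

Second, the kubelet's ErrImagePull/ImagePullBackOff errors for metrics-server-6867b74b74-rhxfx are the intended effect of enabling the addon with --registries=MetricsServer=fake.domain (see the Audit table later in this report): fake.domain/registry.k8s.io/echoserver:1.4 can never resolve. A hedged way to surface those pull failures directly, again assuming a working kubectl:

    # Show the Events section for the pod named in the kubelet log above.
    kubectl --context embed-certs-132595 -n kube-system \
      describe pod metrics-server-6867b74b74-rhxfx | sed -n '/Events:/,$p'
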
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132595 -n embed-certs-132595
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (489.355µs)
helpers_test.go:263: kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.60s)
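Every kubectl invocation in this run fails identically with fork/exec /usr/local/bin/kubectl: exec format error, meaning the kernel refused to execute the binary at all; the usual causes are an architecture mismatch (e.g. an arm64 or darwin binary on this linux/amd64 agent) or a truncated/corrupt download, rather than anything cluster-side. A minimal diagnostic sketch, assuming shell access on the agent (the path comes from the failure messages above):

    # Compare the kubectl binary's format against the host architecture.
    file /usr/local/bin/kubectl   # a healthy binary reports: ELF 64-bit LSB executable, x86-64, ...
    uname -m                      # expect x86_64 on this agent
    # A valid ELF file starts with the magic bytes 7f 45 4c 46 ("\x7fELF"):
    head -c 4 /usr/local/bin/kubectl | od -An -tx1
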

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.96s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x2xqb" [9915d875-dc88-4715-ae81-f996fbf96461] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004476993s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-132595 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-132595 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (561.238µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-132595 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
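The assertion above expects the dashboard-metrics-scraper deployment to reference the substituted image registry.k8s.io/echoserver:1.4 (passed via --images=MetricsScraper=..., per the Audit table below), but since kubectl never executed, the deployment info it would parse is empty. A hedged one-liner equivalent of that check, assuming a working kubectl and the deployment name from the describe command above:

    # Exits non-zero unless the scraper deployment uses the expected image.
    kubectl --context embed-certs-132595 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}' \
      | grep -F 'registry.k8s.io/echoserver:1.4'
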
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-132595
helpers_test.go:235: (dbg) docker inspect embed-certs-132595:

-- stdout --
	[
	    {
	        "Id": "9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95",
	        "Created": "2024-09-16T12:02:27.844570227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 399467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T12:03:43.759944113Z",
	            "FinishedAt": "2024-09-16T12:03:42.837632127Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/hosts",
	        "LogPath": "/var/lib/docker/containers/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95/9f079caa14234752020b9b6cf52d4f65b69ac2abd6f17c0e771f75729356eb95-json.log",
	        "Name": "/embed-certs-132595",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-132595:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-132595",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357-init/diff:/var/lib/docker/overlay2/58e42dd829ca1bf90de154e04f9a3def742eb2fea5b93ce94cf032cb543d188e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45644e60e8e9cd44c533efe227f3bc3c4eabfe27b8f32d1c879f3895ef688357/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-132595",
	                "Source": "/var/lib/docker/volumes/embed-certs-132595/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-132595",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-132595",
	                "name.minikube.sigs.k8s.io": "embed-certs-132595",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a79203c2e79adf60b9e5b0e153e0113002f553de728df9d799869f44fc63bec",
	            "SandboxKey": "/var/run/docker/netns/6a79203c2e79",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-132595": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2bfc3c9091b0bc051827133f808c3cb85965e63d2bf1e9667fc1a6a160dc08f4",
	                    "EndpointID": "62c05676ae68a12a3520f9683ccca5e22609115719e3308d9004a6707b0f5928",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-132595",
	                        "9f079caa1423"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
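The dump above is the container's full inspect record, while the post-mortem only consults a few fields of it: the state block, the restart count, and the host ports mapped for SSH (22) and the API server (8443). When that is all that is needed, docker inspect's -f Go-template output is a much shorter alternative; a minimal sketch using the container name from this report (assumes every exposed port has a host binding, as in the dump above):

    # Print just the state fields the post-mortem checks.
    docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}}' embed-certs-132595
    # Print each container port with its mapped host port.
    docker inspect -f '{{range $port, $b := .NetworkSettings.Ports}}{{$port}} -> {{(index $b 0).HostPort}}{{println}}{{end}}' embed-certs-132595
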
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132595 -n embed-certs-132595
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-132595 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-132595 logs -n 25: (1.252110367s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-451928  | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-451928       | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 11:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 11:56 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-451928                           | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-451928 | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-451928                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-483277 --memory=2200 --alsologtostderr   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:01 UTC | 16 Sep 24 12:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-483277             | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-483277                  | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-483277 --memory=2200 --alsologtostderr   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-483277 image list                           | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	| delete  | -p newest-cni-483277                                   | newest-cni-483277            | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:02 UTC |
	| start   | -p embed-certs-132595                                  | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:02 UTC | 16 Sep 24 12:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-132595            | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:03 UTC | 16 Sep 24 12:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-132595                                  | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:03 UTC | 16 Sep 24 12:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-132595                 | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:03 UTC | 16 Sep 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-132595                                  | embed-certs-132595           | jenkins | v1.34.0 | 16 Sep 24 12:03 UTC | 16 Sep 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 12:03:43
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 12:03:43.360608  399164 out.go:345] Setting OutFile to fd 1 ...
	I0916 12:03:43.360744  399164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:03:43.360756  399164 out.go:358] Setting ErrFile to fd 2...
	I0916 12:03:43.360763  399164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 12:03:43.360954  399164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 12:03:43.361642  399164 out.go:352] Setting JSON to false
	I0916 12:03:43.362997  399164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6363,"bootTime":1726481860,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 12:03:43.363103  399164 start.go:139] virtualization: kvm guest
	I0916 12:03:43.365386  399164 out.go:177] * [embed-certs-132595] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 12:03:43.366831  399164 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 12:03:43.366831  399164 notify.go:220] Checking for updates...
	I0916 12:03:43.369610  399164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 12:03:43.370926  399164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:03:43.372283  399164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 12:03:43.373757  399164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 12:03:43.375263  399164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 12:03:43.377226  399164 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:03:43.377818  399164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 12:03:43.403282  399164 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 12:03:43.403431  399164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 12:03:43.466750  399164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 12:03:43.456105532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 12:03:43.466865  399164 docker.go:318] overlay module found
	I0916 12:03:43.468656  399164 out.go:177] * Using the docker driver based on existing profile
	I0916 12:03:43.469672  399164 start.go:297] selected driver: docker
	I0916 12:03:43.469686  399164 start.go:901] validating driver "docker" against &{Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:03:43.469788  399164 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 12:03:43.470511  399164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 12:03:43.527612  399164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 12:03:43.516927952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 12:03:43.527994  399164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:03:43.528034  399164 cni.go:84] Creating CNI manager for ""
	I0916 12:03:43.528075  399164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:03:43.528126  399164 start.go:340] cluster config:
	{Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:03:43.530174  399164 out.go:177] * Starting "embed-certs-132595" primary control-plane node in "embed-certs-132595" cluster
	I0916 12:03:43.531663  399164 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 12:03:43.533182  399164 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 12:03:43.534528  399164 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:03:43.534578  399164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 12:03:43.534590  399164 cache.go:56] Caching tarball of preloaded images
	I0916 12:03:43.534607  399164 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 12:03:43.534699  399164 preload.go:172] Found /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 12:03:43.534714  399164 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 12:03:43.534854  399164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json ...
	W0916 12:03:43.557214  399164 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 12:03:43.557235  399164 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 12:03:43.557322  399164 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 12:03:43.557380  399164 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 12:03:43.557390  399164 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 12:03:43.557402  399164 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 12:03:43.557413  399164 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 12:03:43.619281  399164 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 12:03:43.619346  399164 cache.go:194] Successfully downloaded all kic artifacts
	I0916 12:03:43.619396  399164 start.go:360] acquireMachinesLock for embed-certs-132595: {Name:mk90285717afa09eeba6eb1eaf13ca243fd0e8ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 12:03:43.619483  399164 start.go:364] duration metric: took 49.062µs to acquireMachinesLock for "embed-certs-132595"
	I0916 12:03:43.619502  399164 start.go:96] Skipping create...Using existing machine configuration
	I0916 12:03:43.619511  399164 fix.go:54] fixHost starting: 
	I0916 12:03:43.619727  399164 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:03:43.638278  399164 fix.go:112] recreateIfNeeded on embed-certs-132595: state=Stopped err=<nil>
	W0916 12:03:43.638315  399164 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 12:03:43.640491  399164 out.go:177] * Restarting existing docker container for "embed-certs-132595" ...
	I0916 12:03:43.641833  399164 cli_runner.go:164] Run: docker start embed-certs-132595
	I0916 12:03:43.938062  399164 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:03:43.956817  399164 kic.go:430] container "embed-certs-132595" state is running.
	I0916 12:03:43.957288  399164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:03:43.975674  399164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/config.json ...
	I0916 12:03:43.975905  399164 machine.go:93] provisionDockerMachine start ...
	I0916 12:03:43.975962  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:43.994669  399164 main.go:141] libmachine: Using SSH client type: native
	I0916 12:03:43.994892  399164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 12:03:43.994907  399164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 12:03:43.995521  399164 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49696->127.0.0.1:33133: read: connection reset by peer
	I0916 12:03:47.129179  399164 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132595
	
	I0916 12:03:47.129214  399164 ubuntu.go:169] provisioning hostname "embed-certs-132595"
	I0916 12:03:47.129303  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:47.147257  399164 main.go:141] libmachine: Using SSH client type: native
	I0916 12:03:47.147434  399164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 12:03:47.147448  399164 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-132595 && echo "embed-certs-132595" | sudo tee /etc/hostname
	I0916 12:03:47.293282  399164 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132595
	
	I0916 12:03:47.293393  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:47.312311  399164 main.go:141] libmachine: Using SSH client type: native
	I0916 12:03:47.312518  399164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 12:03:47.312546  399164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-132595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-132595/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-132595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 12:03:47.445580  399164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
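	The hostname script above is idempotent: it rewrites an existing 127.0.1.1 mapping in place and only appends one when none exists, following the Debian/Ubuntu convention of resolving the local hostname through 127.0.1.1. A minimal verification sketch (illustrative commands, not part of the captured log):
	
	# expect "127.0.1.1 embed-certs-132595" and the matching hostname
	docker exec embed-certs-132595 grep '^127.0.1.1' /etc/hosts
	docker exec embed-certs-132595 hostname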
	I0916 12:03:47.445624  399164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3799/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3799/.minikube}
	I0916 12:03:47.445653  399164 ubuntu.go:177] setting up certificates
	I0916 12:03:47.445665  399164 provision.go:84] configureAuth start
	I0916 12:03:47.445724  399164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:03:47.463846  399164 provision.go:143] copyHostCerts
	I0916 12:03:47.463907  399164 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem, removing ...
	I0916 12:03:47.463916  399164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem
	I0916 12:03:47.463981  399164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/ca.pem (1082 bytes)
	I0916 12:03:47.464117  399164 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem, removing ...
	I0916 12:03:47.464131  399164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem
	I0916 12:03:47.464160  399164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/cert.pem (1123 bytes)
	I0916 12:03:47.464218  399164 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem, removing ...
	I0916 12:03:47.464225  399164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem
	I0916 12:03:47.464245  399164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3799/.minikube/key.pem (1679 bytes)
	I0916 12:03:47.464297  399164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem org=jenkins.embed-certs-132595 san=[127.0.0.1 192.168.103.2 embed-certs-132595 localhost minikube]
	I0916 12:03:47.580833  399164 provision.go:177] copyRemoteCerts
	I0916 12:03:47.580894  399164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 12:03:47.580928  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:47.601542  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:47.703110  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 12:03:47.728021  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 12:03:47.751305  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 12:03:47.774086  399164 provision.go:87] duration metric: took 328.407627ms to configureAuth
	I0916 12:03:47.774120  399164 ubuntu.go:193] setting minikube options for container-runtime
	I0916 12:03:47.774300  399164 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:03:47.774398  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:47.792245  399164 main.go:141] libmachine: Using SSH client type: native
	I0916 12:03:47.792512  399164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 12:03:47.792538  399164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 12:03:48.098282  399164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 12:03:48.098320  399164 machine.go:96] duration metric: took 4.122401052s to provisionDockerMachine
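	The /etc/sysconfig/crio.minikube drop-in written above passes --insecure-registry 10.96.0.0/12 so image pulls from registries exposed on the in-cluster service CIDR can skip TLS. A hedged inspection sketch, assuming the kicbase image's crio.service sources that file as an environment file:
	
	docker exec embed-certs-132595 cat /etc/sysconfig/crio.minikube
	docker exec embed-certs-132595 systemctl cat crio | grep -i environment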
	I0916 12:03:48.098333  399164 start.go:293] postStartSetup for "embed-certs-132595" (driver="docker")
	I0916 12:03:48.098346  399164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 12:03:48.098417  399164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 12:03:48.098472  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:48.117693  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:48.218838  399164 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 12:03:48.222336  399164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 12:03:48.222377  399164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 12:03:48.222392  399164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 12:03:48.222400  399164 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 12:03:48.222417  399164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/addons for local assets ...
	I0916 12:03:48.222476  399164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3799/.minikube/files for local assets ...
	I0916 12:03:48.222575  399164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem -> 112082.pem in /etc/ssl/certs
	I0916 12:03:48.222689  399164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 12:03:48.230945  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /etc/ssl/certs/112082.pem (1708 bytes)
	I0916 12:03:48.253948  399164 start.go:296] duration metric: took 155.597606ms for postStartSetup
	I0916 12:03:48.254043  399164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 12:03:48.254085  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:48.272250  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:48.366746  399164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 12:03:48.371206  399164 fix.go:56] duration metric: took 4.751688089s for fixHost
	I0916 12:03:48.371233  399164 start.go:83] releasing machines lock for "embed-certs-132595", held for 4.751739062s
	I0916 12:03:48.371304  399164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132595
	I0916 12:03:48.389014  399164 ssh_runner.go:195] Run: cat /version.json
	I0916 12:03:48.389026  399164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 12:03:48.389064  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:48.389076  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:48.407268  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:48.408141  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:48.574495  399164 ssh_runner.go:195] Run: systemctl --version
	I0916 12:03:48.579055  399164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 12:03:48.720843  399164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 12:03:48.725713  399164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:03:48.734369  399164 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 12:03:48.734438  399164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 12:03:48.742862  399164 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
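	Renaming the stock loopback/bridge/podman CNI configs to *.mk_disabled leaves whatever config the chosen network plugin (kindnet here) writes as the only one CRI-O will load. An illustrative check of what remains active:
	
	# renamed files end in .mk_disabled; anything else is the live CNI config
	docker exec embed-certs-132595 ls -l /etc/cni/net.d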
	I0916 12:03:48.742891  399164 start.go:495] detecting cgroup driver to use...
	I0916 12:03:48.742928  399164 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 12:03:48.742971  399164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 12:03:48.755066  399164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 12:03:48.766280  399164 docker.go:217] disabling cri-docker service (if available) ...
	I0916 12:03:48.766332  399164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 12:03:48.778881  399164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 12:03:48.789950  399164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 12:03:48.864470  399164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 12:03:48.950633  399164 docker.go:233] disabling docker service ...
	I0916 12:03:48.950718  399164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 12:03:48.963376  399164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 12:03:48.974743  399164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 12:03:49.051012  399164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 12:03:49.132772  399164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 12:03:49.144695  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 12:03:49.161595  399164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 12:03:49.161655  399164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:03:49.171685  399164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 12:03:49.171752  399164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:03:49.181318  399164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:03:49.191127  399164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:03:49.200504  399164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 12:03:49.209667  399164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:03:49.219975  399164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:03:49.229246  399164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 12:03:49.238834  399164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 12:03:49.246914  399164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 12:03:49.255038  399164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:03:49.341203  399164 ssh_runner.go:195] Run: sudo systemctl restart crio
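	The sed sequence above pins pause_image to registry.k8s.io/pause:3.10, sets cgroup_manager = "cgroupfs" with conmon_cgroup = "pod" to match the cgroup driver detected on the host, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before crio is restarted. A hedged verification sketch:
	
	docker exec embed-certs-132595 sh -c '
	  grep -E "pause_image|cgroup_manager|conmon_cgroup" /etc/crio/crio.conf.d/02-crio.conf
	  cat /etc/crictl.yaml               # runtime-endpoint: unix:///var/run/crio/crio.sock
	  cat /proc/sys/net/ipv4/ip_forward  # expect: 1
	'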
	I0916 12:03:49.454908  399164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 12:03:49.454985  399164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 12:03:49.458702  399164 start.go:563] Will wait 60s for crictl version
	I0916 12:03:49.458768  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:03:49.462152  399164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 12:03:49.496877  399164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 12:03:49.496963  399164 ssh_runner.go:195] Run: crio --version
	I0916 12:03:49.532320  399164 ssh_runner.go:195] Run: crio --version
	I0916 12:03:49.570125  399164 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0916 12:03:49.571667  399164 cli_runner.go:164] Run: docker network inspect embed-certs-132595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
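	The Go template handed to docker network inspect above flattens the network's name, driver, subnet, gateway, MTU, and per-container IPs into a single JSON object. The subnet alone can be pulled the same way (illustrative):
	
	docker network inspect embed-certs-132595 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'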
	I0916 12:03:49.588657  399164 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 12:03:49.592300  399164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
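	The { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp pattern above is an idempotent hosts update: drop any stale line for the name, append the fresh mapping, then copy the rebuilt file back in one step. A hypothetical helper generalizing it (name and usage are illustrative):
	
	update_hosts_entry() {  # usage: update_hosts_entry <ip> <name>
	  { grep -v "$(printf '\t')${2}\$" /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}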
	I0916 12:03:49.602950  399164 kubeadm.go:883] updating cluster {Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 12:03:49.603115  399164 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 12:03:49.603168  399164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:03:49.643337  399164 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:03:49.643356  399164 crio.go:433] Images already preloaded, skipping extraction
	I0916 12:03:49.643406  399164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 12:03:49.677810  399164 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 12:03:49.677829  399164 cache_images.go:84] Images are preloaded, skipping loading
	I0916 12:03:49.677837  399164 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 crio true true} ...
	I0916 12:03:49.677960  399164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-132595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
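	In the kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the command inherited from the base unit before the override installs minikube's own invocation, with node identity pinned through --hostname-override and --node-ip. The effective unit can be reviewed with (illustrative):
	
	docker exec embed-certs-132595 systemctl cat kubelet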
	I0916 12:03:49.678032  399164 ssh_runner.go:195] Run: crio config
	I0916 12:03:49.720293  399164 cni.go:84] Creating CNI manager for ""
	I0916 12:03:49.720312  399164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 12:03:49.720321  399164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 12:03:49.720340  399164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-132595 NodeName:embed-certs-132595 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 12:03:49.720458  399164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-132595"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
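	
	The config above chains four objects in one file: InitConfiguration (node name, CRI socket, taints), ClusterConfiguration (API server SANs, admission plugins, etcd, subnets), plus the kubelet and kube-proxy component configs. A hedged sketch for sanity-checking such a file without mutating the node, using kubeadm's dry-run mode (minikube itself drives kubeadm through its own wrapper):
	
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run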
	
	I0916 12:03:49.720514  399164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 12:03:49.729736  399164 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 12:03:49.729790  399164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 12:03:49.737954  399164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I0916 12:03:49.755798  399164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 12:03:49.772411  399164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0916 12:03:49.788702  399164 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 12:03:49.792093  399164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 12:03:49.802482  399164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:03:49.879692  399164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:03:49.892833  399164 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595 for IP: 192.168.103.2
	I0916 12:03:49.892856  399164 certs.go:194] generating shared ca certs ...
	I0916 12:03:49.892871  399164 certs.go:226] acquiring lock for ca certs: {Name:mk2374a592a5cdb2c8990e45a864730033f9b5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:03:49.893016  399164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key
	I0916 12:03:49.893054  399164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key
	I0916 12:03:49.893064  399164 certs.go:256] generating profile certs ...
	I0916 12:03:49.893151  399164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/client.key
	I0916 12:03:49.893207  399164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key.6488143d
	I0916 12:03:49.893242  399164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key
	I0916 12:03:49.893366  399164 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem (1338 bytes)
	W0916 12:03:49.893402  399164 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208_empty.pem, impossibly tiny 0 bytes
	I0916 12:03:49.893411  399164 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 12:03:49.893434  399164 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/ca.pem (1082 bytes)
	I0916 12:03:49.893456  399164 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/cert.pem (1123 bytes)
	I0916 12:03:49.893476  399164 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/certs/key.pem (1679 bytes)
	I0916 12:03:49.893514  399164 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem (1708 bytes)
	I0916 12:03:49.894083  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 12:03:49.919468  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 12:03:49.945781  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 12:03:50.003191  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 12:03:50.031214  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 12:03:50.054752  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 12:03:50.108513  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 12:03:50.133302  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/embed-certs-132595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 12:03:50.156190  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/ssl/certs/112082.pem --> /usr/share/ca-certificates/112082.pem (1708 bytes)
	I0916 12:03:50.180193  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 12:03:50.202681  399164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3799/.minikube/certs/11208.pem --> /usr/share/ca-certificates/11208.pem (1338 bytes)
	I0916 12:03:50.226264  399164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 12:03:50.243122  399164 ssh_runner.go:195] Run: openssl version
	I0916 12:03:50.248344  399164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11208.pem && ln -fs /usr/share/ca-certificates/11208.pem /etc/ssl/certs/11208.pem"
	I0916 12:03:50.257535  399164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11208.pem
	I0916 12:03:50.260958  399164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11208.pem
	I0916 12:03:50.261014  399164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11208.pem
	I0916 12:03:50.267679  399164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11208.pem /etc/ssl/certs/51391683.0"
	I0916 12:03:50.277018  399164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112082.pem && ln -fs /usr/share/ca-certificates/112082.pem /etc/ssl/certs/112082.pem"
	I0916 12:03:50.286412  399164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112082.pem
	I0916 12:03:50.289702  399164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112082.pem
	I0916 12:03:50.289759  399164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112082.pem
	I0916 12:03:50.296356  399164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112082.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 12:03:50.305351  399164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 12:03:50.314681  399164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:03:50.318103  399164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:03:50.318168  399164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 12:03:50.324664  399164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
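	The b5213941.0-style link names above come from OpenSSL's subject-name hash: openssl x509 -hash -noout prints the eight-hex-digit value the library looks up as <hash>.0 in its certificate directory. Recomputing one of them (illustrative):
	
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${H}.0"   # expect a link resolving to minikubeCA.pem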
	I0916 12:03:50.333264  399164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 12:03:50.337070  399164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 12:03:50.343463  399164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 12:03:50.350149  399164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 12:03:50.356501  399164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 12:03:50.363128  399164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 12:03:50.370283  399164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
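	Each -checkend 86400 call above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is how the restart path decides the existing control-plane certs are still safe to reuse. The same check as a guard (illustrative):
	
	if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "apiserver cert expires within 24h; it would need regenerating" >&2
	fi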
	I0916 12:03:50.376818  399164 kubeadm.go:392] StartCluster: {Name:embed-certs-132595 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-132595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 12:03:50.376902  399164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 12:03:50.376943  399164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 12:03:50.411415  399164 cri.go:89] found id: ""
	I0916 12:03:50.411476  399164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 12:03:50.420963  399164 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 12:03:50.421006  399164 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 12:03:50.421049  399164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 12:03:50.429949  399164 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 12:03:50.430702  399164 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-132595" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:03:50.431140  399164 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3799/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-132595" cluster setting kubeconfig missing "embed-certs-132595" context setting]
	I0916 12:03:50.432059  399164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:03:50.434057  399164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 12:03:50.443888  399164 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0916 12:03:50.443922  399164 kubeadm.go:597] duration metric: took 22.910123ms to restartPrimaryControlPlane
	I0916 12:03:50.443935  399164 kubeadm.go:394] duration metric: took 67.122324ms to StartCluster
	I0916 12:03:50.443950  399164 settings.go:142] acquiring lock: {Name:mke577b1a7f09911af300bca49904b50bf9302e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:03:50.444026  399164 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 12:03:50.446057  399164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/kubeconfig: {Name:mk0bd4a8c48947f88af68169b36ee395e0cf7b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 12:03:50.447113  399164 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 12:03:50.447217  399164 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 12:03:50.447317  399164 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132595"
	I0916 12:03:50.447341  399164 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-132595"
	W0916 12:03:50.447353  399164 addons.go:243] addon storage-provisioner should already be in state true
	I0916 12:03:50.447385  399164 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:03:50.447282  399164 config.go:182] Loaded profile config "embed-certs-132595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 12:03:50.447458  399164 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132595"
	I0916 12:03:50.447508  399164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132595"
	I0916 12:03:50.447417  399164 addons.go:69] Setting dashboard=true in profile "embed-certs-132595"
	I0916 12:03:50.447580  399164 addons.go:234] Setting addon dashboard=true in "embed-certs-132595"
	W0916 12:03:50.447598  399164 addons.go:243] addon dashboard should already be in state true
	I0916 12:03:50.447630  399164 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:03:50.447433  399164 addons.go:69] Setting metrics-server=true in profile "embed-certs-132595"
	I0916 12:03:50.447681  399164 addons.go:234] Setting addon metrics-server=true in "embed-certs-132595"
	W0916 12:03:50.447711  399164 addons.go:243] addon metrics-server should already be in state true
	I0916 12:03:50.447741  399164 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:03:50.447846  399164 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:03:50.447892  399164 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:03:50.448160  399164 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:03:50.448267  399164 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:03:50.449036  399164 out.go:177] * Verifying Kubernetes components...
	I0916 12:03:50.450321  399164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 12:03:50.477791  399164 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 12:03:50.477805  399164 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 12:03:50.477833  399164 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 12:03:50.478199  399164 addons.go:234] Setting addon default-storageclass=true in "embed-certs-132595"
	W0916 12:03:50.478218  399164 addons.go:243] addon default-storageclass should already be in state true
	I0916 12:03:50.478246  399164 host.go:66] Checking if "embed-certs-132595" exists ...
	I0916 12:03:50.478707  399164 cli_runner.go:164] Run: docker container inspect embed-certs-132595 --format={{.State.Status}}
	I0916 12:03:50.479528  399164 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 12:03:50.479571  399164 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:03:50.479586  399164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 12:03:50.479581  399164 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 12:03:50.479643  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:50.479649  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:50.481262  399164 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 12:03:50.483316  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 12:03:50.483339  399164 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 12:03:50.483393  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:50.511250  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:50.519410  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:50.531126  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:50.531652  399164 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 12:03:50.531688  399164 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 12:03:50.531756  399164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132595
	I0916 12:03:50.549550  399164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/embed-certs-132595/id_rsa Username:docker}
	I0916 12:03:50.731063  399164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 12:03:50.817822  399164 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132595" to be "Ready" ...
	I0916 12:03:50.894103  399164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 12:03:50.895304  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 12:03:50.895326  399164 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 12:03:50.897727  399164 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 12:03:50.897754  399164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 12:03:50.994081  399164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 12:03:50.995808  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 12:03:50.995837  399164 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 12:03:51.002729  399164 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 12:03:51.002836  399164 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 12:03:51.100130  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 12:03:51.100228  399164 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 12:03:51.107644  399164 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 12:03:51.107672  399164 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 12:03:51.203327  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 12:03:51.203418  399164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 12:03:51.212130  399164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 12:03:51.294126  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 12:03:51.294210  399164 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0916 12:03:51.316207  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 12:03:51.316232  399164 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 12:03:51.393995  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 12:03:51.394026  399164 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0916 12:03:51.415351  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 12:03:51.415377  399164 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 12:03:51.432616  399164 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 12:03:51.432650  399164 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 12:03:51.513203  399164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
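Each addon above follows the same two-step pattern the log records: the manifest is staged over SSH (ssh_runner scp ... --> /etc/kubernetes/addons/...), then applied with the bundled kubectl against the in-cluster kubeconfig. A minimal sketch of replaying that apply step by hand from inside the node (for example via `minikube -p embed-certs-132595 ssh`, which is an assumption here, not part of the test run); paths and the kubectl binary are verbatim from the log:

		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.31.1/kubectl apply \
		  -f /etc/kubernetes/addons/dashboard-ns.yaml \
		  -f /etc/kubernetes/addons/dashboard-svc.yaml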
	I0916 12:03:53.511418  399164 node_ready.go:49] node "embed-certs-132595" has status "Ready":"True"
	I0916 12:03:53.511455  399164 node_ready.go:38] duration metric: took 2.693594486s for node "embed-certs-132595" to be "Ready" ...
	I0916 12:03:53.511469  399164 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 12:03:53.618324  399164 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.708987  399164 pod_ready.go:93] pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:53.709078  399164 pod_ready.go:82] duration metric: took 90.721796ms for pod "coredns-7c65d6cfc9-lmhpj" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.709101  399164 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.720240  399164 pod_ready.go:93] pod "etcd-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:53.720266  399164 pod_ready.go:82] duration metric: took 11.155055ms for pod "etcd-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.720285  399164 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.804650  399164 pod_ready.go:93] pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:53.804679  399164 pod_ready.go:82] duration metric: took 84.382169ms for pod "kube-apiserver-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.804696  399164 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.820698  399164 pod_ready.go:93] pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:53.820722  399164 pod_ready.go:82] duration metric: took 16.002327ms for pod "kube-controller-manager-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.820735  399164 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5jjq9" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.826095  399164 pod_ready.go:93] pod "kube-proxy-5jjq9" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:53.826119  399164 pod_ready.go:82] duration metric: took 5.375549ms for pod "kube-proxy-5jjq9" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:53.826135  399164 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:54.114568  399164 pod_ready.go:93] pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace has status "Ready":"True"
	I0916 12:03:54.114597  399164 pod_ready.go:82] duration metric: took 288.453862ms for pod "kube-scheduler-embed-certs-132595" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:54.114612  399164 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rhxfx" in "kube-system" namespace to be "Ready" ...
	I0916 12:03:55.494009  399164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.499888163s)
	I0916 12:03:55.494018  399164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.599858266s)
	I0916 12:03:55.600940  399164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.388762236s)
	I0916 12:03:55.600990  399164 addons.go:475] Verifying addon metrics-server=true in "embed-certs-132595"
	I0916 12:03:55.655232  399164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.141971986s)
	I0916 12:03:55.657269  399164 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-132595 addons enable metrics-server
	
	I0916 12:03:55.658947  399164 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0916 12:03:55.660499  399164 addons.go:510] duration metric: took 5.213284261s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
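With all four addons reported enabled, their state can be confirmed from the host; `minikube addons list` is a standard minikube subcommand (a quick check, not something the test itself runs):

		minikube -p embed-certs-132595 addons list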
	I0916 12:03:56.120171  399164 pod_ready.go:103] pod "metrics-server-6867b74b74-rhxfx" in "kube-system" namespace has status "Ready":"False"
	[... the same pod_ready.go:103 check repeats with the same "Ready":"False" result roughly every 2.5s from 12:03:58 through 12:07:53 ...]
	I0916 12:07:53.620778  399164 pod_ready.go:103] pod "metrics-server-6867b74b74-rhxfx" in "kube-system" namespace has status "Ready":"False"
	I0916 12:07:54.120481  399164 pod_ready.go:82] duration metric: took 4m0.005849458s for pod "metrics-server-6867b74b74-rhxfx" in "kube-system" namespace to be "Ready" ...
	E0916 12:07:54.120513  399164 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 12:07:54.120524  399164 pod_ready.go:39] duration metric: took 4m0.609043462s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
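The metrics-server pod never reports Ready within the extra 4m0s wait, so the wait ends with a context-deadline error. That outcome is consistent with the setup logged at 12:03:50, where the addon is pointed at the image fake.domain/registry.k8s.io/echoserver:1.4 — an unresolvable registry, so the image pull cannot succeed and the pod stays Pending (as the pod listing at 12:08:04 below confirms). A hedged diagnostic sketch, not part of the test run; the pod name and namespace come from the log, while the kubectl context matching the profile name is an assumption:

		kubectl --context embed-certs-132595 -n kube-system \
		  describe pod metrics-server-6867b74b74-rhxfx
		# image-pull failures would surface as events on the pod, e.g.:
		kubectl --context embed-certs-132595 -n kube-system get events \
		  --field-selector involvedObject.name=metrics-server-6867b74b74-rhxfx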
	I0916 12:07:54.120545  399164 api_server.go:52] waiting for apiserver process to appear ...
	I0916 12:07:54.120612  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 12:07:54.120678  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 12:07:54.156095  399164 cri.go:89] found id: "d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227"
	I0916 12:07:54.156125  399164 cri.go:89] found id: ""
	I0916 12:07:54.156136  399164 logs.go:276] 1 containers: [d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227]
	I0916 12:07:54.156192  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.159815  399164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 12:07:54.159898  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 12:07:54.193988  399164 cri.go:89] found id: "563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be"
	I0916 12:07:54.194014  399164 cri.go:89] found id: ""
	I0916 12:07:54.194024  399164 logs.go:276] 1 containers: [563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be]
	I0916 12:07:54.194074  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.197516  399164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 12:07:54.197580  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 12:07:54.231427  399164 cri.go:89] found id: "2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee"
	I0916 12:07:54.231465  399164 cri.go:89] found id: ""
	I0916 12:07:54.231477  399164 logs.go:276] 1 containers: [2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee]
	I0916 12:07:54.231531  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.235037  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 12:07:54.235096  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 12:07:54.268534  399164 cri.go:89] found id: "25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5"
	I0916 12:07:54.268582  399164 cri.go:89] found id: ""
	I0916 12:07:54.268593  399164 logs.go:276] 1 containers: [25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5]
	I0916 12:07:54.268646  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.272246  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 12:07:54.272303  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 12:07:54.306940  399164 cri.go:89] found id: "bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3"
	I0916 12:07:54.306963  399164 cri.go:89] found id: ""
	I0916 12:07:54.306972  399164 logs.go:276] 1 containers: [bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3]
	I0916 12:07:54.307031  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.311054  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 12:07:54.311128  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 12:07:54.347182  399164 cri.go:89] found id: "2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749"
	I0916 12:07:54.347207  399164 cri.go:89] found id: ""
	I0916 12:07:54.347215  399164 logs.go:276] 1 containers: [2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749]
	I0916 12:07:54.347258  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.350879  399164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 12:07:54.350961  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 12:07:54.386175  399164 cri.go:89] found id: "3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351"
	I0916 12:07:54.386197  399164 cri.go:89] found id: ""
	I0916 12:07:54.386205  399164 logs.go:276] 1 containers: [3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351]
	I0916 12:07:54.386253  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.389694  399164 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 12:07:54.389752  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 12:07:54.424628  399164 cri.go:89] found id: "36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488"
	I0916 12:07:54.424652  399164 cri.go:89] found id: "f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104"
	I0916 12:07:54.424658  399164 cri.go:89] found id: ""
	I0916 12:07:54.424667  399164 logs.go:276] 2 containers: [36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488 f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104]
	I0916 12:07:54.424717  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.428129  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.431278  399164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 12:07:54.431342  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 12:07:54.466027  399164 cri.go:89] found id: "85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34"
	I0916 12:07:54.466050  399164 cri.go:89] found id: ""
	I0916 12:07:54.466058  399164 logs.go:276] 1 containers: [85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34]
	I0916 12:07:54.466105  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:54.469963  399164 logs.go:123] Gathering logs for kube-proxy [bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3] ...
	I0916 12:07:54.469988  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3"
	I0916 12:07:54.504538  399164 logs.go:123] Gathering logs for kube-controller-manager [2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749] ...
	I0916 12:07:54.504566  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749"
	I0916 12:07:54.556893  399164 logs.go:123] Gathering logs for describe nodes ...
	I0916 12:07:54.556926  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 12:07:54.652443  399164 logs.go:123] Gathering logs for coredns [2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee] ...
	I0916 12:07:54.652478  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee"
	I0916 12:07:54.688152  399164 logs.go:123] Gathering logs for storage-provisioner [f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104] ...
	I0916 12:07:54.688181  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104"
	I0916 12:07:54.724484  399164 logs.go:123] Gathering logs for dmesg ...
	I0916 12:07:54.724515  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 12:07:54.749119  399164 logs.go:123] Gathering logs for container status ...
	I0916 12:07:54.749151  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 12:07:54.787757  399164 logs.go:123] Gathering logs for kube-apiserver [d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227] ...
	I0916 12:07:54.787787  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227"
	I0916 12:07:54.830934  399164 logs.go:123] Gathering logs for etcd [563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be] ...
	I0916 12:07:54.830966  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be"
	I0916 12:07:54.871819  399164 logs.go:123] Gathering logs for kube-scheduler [25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5] ...
	I0916 12:07:54.871861  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5"
	I0916 12:07:54.908026  399164 logs.go:123] Gathering logs for kindnet [3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351] ...
	I0916 12:07:54.908148  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351"
	I0916 12:07:54.946016  399164 logs.go:123] Gathering logs for storage-provisioner [36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488] ...
	I0916 12:07:54.946046  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488"
	I0916 12:07:54.980766  399164 logs.go:123] Gathering logs for kubernetes-dashboard [85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34] ...
	I0916 12:07:54.980796  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34"
	I0916 12:07:55.014878  399164 logs.go:123] Gathering logs for CRI-O ...
	I0916 12:07:55.014905  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 12:07:55.080342  399164 logs.go:123] Gathering logs for kubelet ...
	I0916 12:07:55.080410  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
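The log-gathering pass above resolves each control-plane container ID with crictl, tails its last 400 log lines, and pulls kubelet and CRI-O output from journald. A sketch of the same commands run by hand on the node, with an illustrative container-ID variable (the commands themselves are verbatim from the log):

		id=$(sudo crictl ps -a --quiet --name=kube-apiserver)
		sudo /usr/bin/crictl logs --tail 400 "$id"
		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400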
	I0916 12:07:57.649929  399164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 12:07:57.661580  399164 api_server.go:72] duration metric: took 4m7.21442723s to wait for apiserver process to appear ...
	I0916 12:07:57.661638  399164 api_server.go:88] waiting for apiserver healthz status ...
	I0916 12:07:57.661694  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 12:07:57.661750  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 12:07:57.695451  399164 cri.go:89] found id: "d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227"
	I0916 12:07:57.695480  399164 cri.go:89] found id: ""
	I0916 12:07:57.695489  399164 logs.go:276] 1 containers: [d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227]
	I0916 12:07:57.695556  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:57.699382  399164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 12:07:57.699456  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 12:07:57.734442  399164 cri.go:89] found id: "563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be"
	I0916 12:07:57.734462  399164 cri.go:89] found id: ""
	I0916 12:07:57.734469  399164 logs.go:276] 1 containers: [563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be]
	I0916 12:07:57.734517  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:57.738020  399164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 12:07:57.738084  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 12:07:57.772017  399164 cri.go:89] found id: "2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee"
	I0916 12:07:57.772035  399164 cri.go:89] found id: ""
	I0916 12:07:57.772042  399164 logs.go:276] 1 containers: [2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee]
	I0916 12:07:57.772081  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:57.775516  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 12:07:57.775577  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 12:07:57.810785  399164 cri.go:89] found id: "25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5"
	I0916 12:07:57.810815  399164 cri.go:89] found id: ""
	I0916 12:07:57.810822  399164 logs.go:276] 1 containers: [25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5]
	I0916 12:07:57.810867  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:57.814821  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 12:07:57.814903  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 12:07:57.849281  399164 cri.go:89] found id: "bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3"
	I0916 12:07:57.849304  399164 cri.go:89] found id: ""
	I0916 12:07:57.849313  399164 logs.go:276] 1 containers: [bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3]
	I0916 12:07:57.849394  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:57.852895  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 12:07:57.852961  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 12:07:57.889080  399164 cri.go:89] found id: "2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749"
	I0916 12:07:57.889104  399164 cri.go:89] found id: ""
	I0916 12:07:57.889111  399164 logs.go:276] 1 containers: [2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749]
	I0916 12:07:57.889162  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:57.892706  399164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 12:07:57.892769  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 12:07:57.926858  399164 cri.go:89] found id: "3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351"
	I0916 12:07:57.926883  399164 cri.go:89] found id: ""
	I0916 12:07:57.926891  399164 logs.go:276] 1 containers: [3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351]
	I0916 12:07:57.926932  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:57.930550  399164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 12:07:57.930610  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 12:07:57.964020  399164 cri.go:89] found id: "85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34"
	I0916 12:07:57.964040  399164 cri.go:89] found id: ""
	I0916 12:07:57.964049  399164 logs.go:276] 1 containers: [85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34]
	I0916 12:07:57.964102  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:57.967763  399164 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 12:07:57.967828  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 12:07:58.001717  399164 cri.go:89] found id: "36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488"
	I0916 12:07:58.001746  399164 cri.go:89] found id: "f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104"
	I0916 12:07:58.001753  399164 cri.go:89] found id: ""
	I0916 12:07:58.001763  399164 logs.go:276] 2 containers: [36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488 f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104]
	I0916 12:07:58.001828  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:58.005308  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:07:58.008749  399164 logs.go:123] Gathering logs for storage-provisioner [36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488] ...
	I0916 12:07:58.008773  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488"
	I0916 12:07:58.044676  399164 logs.go:123] Gathering logs for etcd [563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be] ...
	I0916 12:07:58.044703  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be"
	I0916 12:07:58.084352  399164 logs.go:123] Gathering logs for describe nodes ...
	I0916 12:07:58.084381  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 12:07:58.180547  399164 logs.go:123] Gathering logs for kube-scheduler [25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5] ...
	I0916 12:07:58.180583  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5"
	I0916 12:07:58.215326  399164 logs.go:123] Gathering logs for kube-controller-manager [2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749] ...
	I0916 12:07:58.215355  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749"
	I0916 12:07:58.267092  399164 logs.go:123] Gathering logs for storage-provisioner [f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104] ...
	I0916 12:07:58.267125  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104"
	I0916 12:07:58.302094  399164 logs.go:123] Gathering logs for CRI-O ...
	I0916 12:07:58.302118  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 12:07:58.362524  399164 logs.go:123] Gathering logs for kubelet ...
	I0916 12:07:58.362556  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 12:07:58.432631  399164 logs.go:123] Gathering logs for kindnet [3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351] ...
	I0916 12:07:58.432671  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351"
	I0916 12:07:58.471683  399164 logs.go:123] Gathering logs for container status ...
	I0916 12:07:58.471714  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 12:07:58.511239  399164 logs.go:123] Gathering logs for coredns [2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee] ...
	I0916 12:07:58.511266  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee"
	I0916 12:07:58.547032  399164 logs.go:123] Gathering logs for kube-apiserver [d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227] ...
	I0916 12:07:58.547062  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227"
	I0916 12:07:58.588860  399164 logs.go:123] Gathering logs for kube-proxy [bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3] ...
	I0916 12:07:58.588899  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3"
	I0916 12:07:58.624232  399164 logs.go:123] Gathering logs for kubernetes-dashboard [85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34] ...
	I0916 12:07:58.624259  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34"
	I0916 12:07:58.658327  399164 logs.go:123] Gathering logs for dmesg ...
	I0916 12:07:58.658357  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 12:08:01.184164  399164 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 12:08:01.188714  399164 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 12:08:01.189622  399164 api_server.go:141] control plane version: v1.31.1
	I0916 12:08:01.189643  399164 api_server.go:131] duration metric: took 3.527998331s to wait for apiserver health ...
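The healthz probe targets the apiserver on the node's internal address shown in the log. By hand, the same check could look like the following (skipping TLS verification for brevity, which is an assumption; the real client trusts the cluster CA):

		curl -k https://192.168.103.2:8443/healthz
		# per the log above, a healthy apiserver answers: ok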
	I0916 12:08:01.189651  399164 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 12:08:01.189675  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 12:08:01.189727  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 12:08:01.223373  399164 cri.go:89] found id: "d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227"
	I0916 12:08:01.223395  399164 cri.go:89] found id: ""
	I0916 12:08:01.223403  399164 logs.go:276] 1 containers: [d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227]
	I0916 12:08:01.223444  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.227006  399164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 12:08:01.227074  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 12:08:01.261570  399164 cri.go:89] found id: "563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be"
	I0916 12:08:01.261616  399164 cri.go:89] found id: ""
	I0916 12:08:01.261628  399164 logs.go:276] 1 containers: [563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be]
	I0916 12:08:01.261684  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.265132  399164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 12:08:01.265197  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 12:08:01.299851  399164 cri.go:89] found id: "2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee"
	I0916 12:08:01.299875  399164 cri.go:89] found id: ""
	I0916 12:08:01.299883  399164 logs.go:276] 1 containers: [2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee]
	I0916 12:08:01.299931  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.303558  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 12:08:01.303628  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 12:08:01.338955  399164 cri.go:89] found id: "25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5"
	I0916 12:08:01.338976  399164 cri.go:89] found id: ""
	I0916 12:08:01.338985  399164 logs.go:276] 1 containers: [25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5]
	I0916 12:08:01.339041  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.342597  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 12:08:01.342678  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 12:08:01.376187  399164 cri.go:89] found id: "bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3"
	I0916 12:08:01.376209  399164 cri.go:89] found id: ""
	I0916 12:08:01.376220  399164 logs.go:276] 1 containers: [bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3]
	I0916 12:08:01.376269  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.379684  399164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 12:08:01.379744  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 12:08:01.414196  399164 cri.go:89] found id: "2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749"
	I0916 12:08:01.414216  399164 cri.go:89] found id: ""
	I0916 12:08:01.414223  399164 logs.go:276] 1 containers: [2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749]
	I0916 12:08:01.414267  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.417803  399164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 12:08:01.417869  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 12:08:01.453415  399164 cri.go:89] found id: "3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351"
	I0916 12:08:01.453437  399164 cri.go:89] found id: ""
	I0916 12:08:01.453445  399164 logs.go:276] 1 containers: [3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351]
	I0916 12:08:01.453499  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.457012  399164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 12:08:01.457072  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 12:08:01.492157  399164 cri.go:89] found id: "85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34"
	I0916 12:08:01.492179  399164 cri.go:89] found id: ""
	I0916 12:08:01.492188  399164 logs.go:276] 1 containers: [85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34]
	I0916 12:08:01.492242  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.496104  399164 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0916 12:08:01.496189  399164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 12:08:01.531641  399164 cri.go:89] found id: "36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488"
	I0916 12:08:01.531670  399164 cri.go:89] found id: "f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104"
	I0916 12:08:01.531675  399164 cri.go:89] found id: ""
	I0916 12:08:01.531684  399164 logs.go:276] 2 containers: [36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488 f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104]
	I0916 12:08:01.531728  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.535285  399164 ssh_runner.go:195] Run: which crictl
	I0916 12:08:01.538706  399164 logs.go:123] Gathering logs for kube-scheduler [25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5] ...
	I0916 12:08:01.538726  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5"
	I0916 12:08:01.573485  399164 logs.go:123] Gathering logs for kubernetes-dashboard [85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34] ...
	I0916 12:08:01.573511  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34"
	I0916 12:08:01.608574  399164 logs.go:123] Gathering logs for container status ...
	I0916 12:08:01.608619  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 12:08:01.649881  399164 logs.go:123] Gathering logs for describe nodes ...
	I0916 12:08:01.649911  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 12:08:01.750496  399164 logs.go:123] Gathering logs for coredns [2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee] ...
	I0916 12:08:01.750526  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee"
	I0916 12:08:01.786061  399164 logs.go:123] Gathering logs for kube-controller-manager [2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749] ...
	I0916 12:08:01.786093  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749"
	I0916 12:08:01.840081  399164 logs.go:123] Gathering logs for CRI-O ...
	I0916 12:08:01.840122  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 12:08:01.902880  399164 logs.go:123] Gathering logs for dmesg ...
	I0916 12:08:01.902916  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 12:08:01.928088  399164 logs.go:123] Gathering logs for kube-apiserver [d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227] ...
	I0916 12:08:01.928126  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227"
	I0916 12:08:01.972117  399164 logs.go:123] Gathering logs for etcd [563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be] ...
	I0916 12:08:01.972163  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be"
	I0916 12:08:02.011130  399164 logs.go:123] Gathering logs for storage-provisioner [36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488] ...
	I0916 12:08:02.011159  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488"
	I0916 12:08:02.045760  399164 logs.go:123] Gathering logs for storage-provisioner [f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104] ...
	I0916 12:08:02.045794  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104"
	I0916 12:08:02.079144  399164 logs.go:123] Gathering logs for kubelet ...
	I0916 12:08:02.079168  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 12:08:02.148057  399164 logs.go:123] Gathering logs for kube-proxy [bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3] ...
	I0916 12:08:02.148093  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3"
	I0916 12:08:02.183301  399164 logs.go:123] Gathering logs for kindnet [3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351] ...
	I0916 12:08:02.183330  399164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351"
	I0916 12:08:04.731412  399164 system_pods.go:59] 9 kube-system pods found
	I0916 12:08:04.731450  399164 system_pods.go:61] "coredns-7c65d6cfc9-lmhpj" [dec7e28f-bb5b-4238-abf8-a17607466015] Running
	I0916 12:08:04.731456  399164 system_pods.go:61] "etcd-embed-certs-132595" [a0b7465f-7b8a-4c03-9c7b-9aba551d7d98] Running
	I0916 12:08:04.731461  399164 system_pods.go:61] "kindnet-s4vkq" [8a7383ab-18b0-4118-9810-ff1cbbdd9ecf] Running
	I0916 12:08:04.731465  399164 system_pods.go:61] "kube-apiserver-embed-certs-132595" [8df2452b-d2dc-44af-86cb-75d1fb8a71d5] Running
	I0916 12:08:04.731469  399164 system_pods.go:61] "kube-controller-manager-embed-certs-132595" [673d272a-803b-45a5-81e7-ba32ff89ec4f] Running
	I0916 12:08:04.731475  399164 system_pods.go:61] "kube-proxy-5jjq9" [da63c6b0-19b1-4ab0-abc4-ac2b785e8e88] Running
	I0916 12:08:04.731479  399164 system_pods.go:61] "kube-scheduler-embed-certs-132595" [b8f3262f-ab89-4efd-8ec2-bcea70ce3c3f] Running
	I0916 12:08:04.731485  399164 system_pods.go:61] "metrics-server-6867b74b74-rhxfx" [1f7ed956-692d-4b25-9cbf-8f79cf304d25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 12:08:04.731493  399164 system_pods.go:61] "storage-provisioner" [b94fecd1-4b72-474b-9296-fb5c86912f64] Running
	I0916 12:08:04.731501  399164 system_pods.go:74] duration metric: took 3.541844674s to wait for pod list to return data ...
	I0916 12:08:04.731509  399164 default_sa.go:34] waiting for default service account to be created ...
	I0916 12:08:04.733997  399164 default_sa.go:45] found service account: "default"
	I0916 12:08:04.734018  399164 default_sa.go:55] duration metric: took 2.500735ms for default service account to be created ...
	I0916 12:08:04.734026  399164 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 12:08:04.738565  399164 system_pods.go:86] 9 kube-system pods found
	I0916 12:08:04.738588  399164 system_pods.go:89] "coredns-7c65d6cfc9-lmhpj" [dec7e28f-bb5b-4238-abf8-a17607466015] Running
	I0916 12:08:04.738597  399164 system_pods.go:89] "etcd-embed-certs-132595" [a0b7465f-7b8a-4c03-9c7b-9aba551d7d98] Running
	I0916 12:08:04.738601  399164 system_pods.go:89] "kindnet-s4vkq" [8a7383ab-18b0-4118-9810-ff1cbbdd9ecf] Running
	I0916 12:08:04.738605  399164 system_pods.go:89] "kube-apiserver-embed-certs-132595" [8df2452b-d2dc-44af-86cb-75d1fb8a71d5] Running
	I0916 12:08:04.738610  399164 system_pods.go:89] "kube-controller-manager-embed-certs-132595" [673d272a-803b-45a5-81e7-ba32ff89ec4f] Running
	I0916 12:08:04.738613  399164 system_pods.go:89] "kube-proxy-5jjq9" [da63c6b0-19b1-4ab0-abc4-ac2b785e8e88] Running
	I0916 12:08:04.738617  399164 system_pods.go:89] "kube-scheduler-embed-certs-132595" [b8f3262f-ab89-4efd-8ec2-bcea70ce3c3f] Running
	I0916 12:08:04.738623  399164 system_pods.go:89] "metrics-server-6867b74b74-rhxfx" [1f7ed956-692d-4b25-9cbf-8f79cf304d25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 12:08:04.738627  399164 system_pods.go:89] "storage-provisioner" [b94fecd1-4b72-474b-9296-fb5c86912f64] Running
	I0916 12:08:04.738635  399164 system_pods.go:126] duration metric: took 4.603384ms to wait for k8s-apps to be running ...
	I0916 12:08:04.738644  399164 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 12:08:04.738686  399164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 12:08:04.750479  399164 system_svc.go:56] duration metric: took 11.824517ms WaitForService to wait for kubelet
	I0916 12:08:04.750506  399164 kubeadm.go:582] duration metric: took 4m14.30336032s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 12:08:04.750523  399164 node_conditions.go:102] verifying NodePressure condition ...
	I0916 12:08:04.753081  399164 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 12:08:04.753104  399164 node_conditions.go:123] node cpu capacity is 8
	I0916 12:08:04.753115  399164 node_conditions.go:105] duration metric: took 2.588014ms to run NodePressure ...
	I0916 12:08:04.753127  399164 start.go:241] waiting for startup goroutines ...
	I0916 12:08:04.753136  399164 start.go:246] waiting for cluster config update ...
	I0916 12:08:04.753153  399164 start.go:255] writing updated cluster config ...
	I0916 12:08:04.753470  399164 ssh_runner.go:195] Run: rm -f paused
	I0916 12:08:04.759559  399164 out.go:177] * Done! kubectl is now configured to use "embed-certs-132595" cluster and "default" namespace by default
	E0916 12:08:04.760901  399164 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
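Note: the "fork/exec /usr/local/bin/kubectl: exec format error" above means the kernel refused to execute the kubectl binary on the test host, which typically indicates an architecture mismatch or a corrupted/truncated download rather than a cluster problem. A minimal sketch for confirming this on the host (assuming a Linux x86_64 runner; path as logged above):

    $ uname -m                                   # expect: x86_64
    $ file /usr/local/bin/kubectl                # expect: ELF 64-bit LSB executable, x86-64
    $ od -An -t x1 -N 4 /usr/local/bin/kubectl   # expect the ELF magic: 7f 45 4c 46

If the reported architecture or magic bytes do not match the host, reinstalling the binary for the correct platform should clear this class of failure.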
	
	
	==> CRI-O <==
	Sep 16 12:06:47 embed-certs-132595 crio[662]: time="2024-09-16 12:06:47.042338172Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.014750439Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=86c2ce61-a99c-4bdb-80c1-001c24aab715 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.014974138Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=86c2ce61-a99c-4bdb-80c1-001c24aab715 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.015814704Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8f074c81-9bab-4be5-b630-2029143c015f name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.016018306Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8f074c81-9bab-4be5-b630-2029143c015f name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.016779409Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-8b494/dashboard-metrics-scraper" id=a00b7be5-4a3d-4a31-a39c-9c4660bd859b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.016881292Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.069382825Z" level=info msg="Created container 44741d025e5263999f5f8c9432acd71b5a26fcf12bf28cc7832800f946d89504: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-8b494/dashboard-metrics-scraper" id=a00b7be5-4a3d-4a31-a39c-9c4660bd859b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.070050415Z" level=info msg="Starting container: 44741d025e5263999f5f8c9432acd71b5a26fcf12bf28cc7832800f946d89504" id=c2af23d3-e8ac-4ae7-b60e-84eb40d5df1d name=/runtime.v1.RuntimeService/StartContainer
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.075401328Z" level=info msg="Started container" PID=2356 containerID=44741d025e5263999f5f8c9432acd71b5a26fcf12bf28cc7832800f946d89504 description=kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-8b494/dashboard-metrics-scraper id=c2af23d3-e8ac-4ae7-b60e-84eb40d5df1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc04c41c4f57182cbde9a5fbc41018b3e79ae46d148aaf739be4bb3949743700
	Sep 16 12:06:53 embed-certs-132595 conmon[2344]: conmon 44741d025e5263999f5f <ninfo>: container 2356 exited with status 1
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.574709625Z" level=info msg="Removing container: 7570963fc24fd222ff29ec50727c6146511a427383acc84ecaab6f7f73c922ba" id=72c7d176-a92e-475f-a7de-f364d06b5543 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 12:06:53 embed-certs-132595 crio[662]: time="2024-09-16 12:06:53.587483606Z" level=info msg="Removed container 7570963fc24fd222ff29ec50727c6146511a427383acc84ecaab6f7f73c922ba: kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-8b494/dashboard-metrics-scraper" id=72c7d176-a92e-475f-a7de-f364d06b5543 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 12:07:02 embed-certs-132595 crio[662]: time="2024-09-16 12:07:02.013751331Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=463f3ffb-319e-4366-946e-829345e899cc name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:02 embed-certs-132595 crio[662]: time="2024-09-16 12:07:02.014051717Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=463f3ffb-319e-4366-946e-829345e899cc name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:15 embed-certs-132595 crio[662]: time="2024-09-16 12:07:15.014404985Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6d5e60b7-dde3-45b2-a8d0-06538a8c443f name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:15 embed-certs-132595 crio[662]: time="2024-09-16 12:07:15.014730410Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6d5e60b7-dde3-45b2-a8d0-06538a8c443f name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:30 embed-certs-132595 crio[662]: time="2024-09-16 12:07:30.014483407Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e7530730-933c-4f25-b827-3ae69d5a1d41 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:30 embed-certs-132595 crio[662]: time="2024-09-16 12:07:30.014794249Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e7530730-933c-4f25-b827-3ae69d5a1d41 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:41 embed-certs-132595 crio[662]: time="2024-09-16 12:07:41.014091149Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d10fbeb6-cf4f-445c-a1a8-17da1ed1d802 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:41 embed-certs-132595 crio[662]: time="2024-09-16 12:07:41.014322715Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d10fbeb6-cf4f-445c-a1a8-17da1ed1d802 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:53 embed-certs-132595 crio[662]: time="2024-09-16 12:07:53.014222612Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=687adbf2-2702-4545-a469-858cf8e05ece name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:07:53 embed-certs-132595 crio[662]: time="2024-09-16 12:07:53.014432020Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=687adbf2-2702-4545-a469-858cf8e05ece name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:08:05 embed-certs-132595 crio[662]: time="2024-09-16 12:08:05.014805778Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5c8d8ea6-43a8-43b1-b318-1a71d0b536f4 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 12:08:05 embed-certs-132595 crio[662]: time="2024-09-16 12:08:05.015140815Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5c8d8ea6-43a8-43b1-b318-1a71d0b536f4 name=/runtime.v1.ImageService/ImageStatus
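Note: the repeated "Image fake.domain/registry.k8s.io/echoserver:1.4 not found" entries appear to be expected in this scenario: the metrics-server pod references a fake registry host ("fake.domain"), so every kubelet-triggered pull check fails and backs off. The pull failure can be reproduced directly against CRI-O on the node (assumes crictl is installed):

    $ sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
    # expected to fail: "fake.domain" does not resolve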
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	44741d025e526       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           About a minute ago   Exited              dashboard-metrics-scraper   5                   bc04c41c4f571       dashboard-metrics-scraper-7c96f5b85b-8b494
	36129a02061e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           3 minutes ago        Running             storage-provisioner         2                   cffc97cfdc895       storage-provisioner
	85af30dcd81b2       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   4 minutes ago        Running             kubernetes-dashboard        0                   3403fbeb47a28       kubernetes-dashboard-695b96c756-x2xqb
	2291346455e83       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           4 minutes ago        Running             coredns                     1                   77298c64c3586       coredns-7c65d6cfc9-lmhpj
	3be21e3ce716d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                           4 minutes ago        Running             kindnet-cni                 1                   28d0c27612789       kindnet-s4vkq
	f26a86dedad23       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           4 minutes ago        Exited              storage-provisioner         1                   cffc97cfdc895       storage-provisioner
	bc5b82c0c6904       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                           4 minutes ago        Running             kube-proxy                  1                   36297582811bd       kube-proxy-5jjq9
	25c574220d35e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                           4 minutes ago        Running             kube-scheduler              1                   d93eac9dc98e7       kube-scheduler-embed-certs-132595
	2d30a88cdf4ef       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                           4 minutes ago        Running             kube-controller-manager     1                   48fefb716ca64       kube-controller-manager-embed-certs-132595
	d5ec3cb947232       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                           4 minutes ago        Running             kube-apiserver              1                   922960e29f480       kube-apiserver-embed-certs-132595
	563d413f204fe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                           4 minutes ago        Running             etcd                        1                   4724aacc72a95       etcd-embed-certs-132595
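Note: the table corroborates the kubelet section below: dashboard-metrics-scraper is Exited on attempt 5, i.e. crash-looping, while every control-plane container has been Running for about 4 minutes. A sketch for pulling the scraper's last output straight from CRI-O on the node (ID prefix taken from the CONTAINER column above):

    $ sudo crictl ps -a --name dashboard-metrics-scraper
    $ sudo crictl logs 44741d025e526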
	
	
	==> coredns [2291346455e8349d59451afedbbb89abf192f57aa6a81172fad3b56b27e05bee] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51870 - 57913 "HINFO IN 1485393003917888263.6671959073537653179. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00965392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[197111291]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 12:03:55.211) (total time: 30001ms):
	Trace[197111291]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:04:25.212)
	Trace[197111291]: [30.001046546s] [30.001046546s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1664511889]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 12:03:55.211) (total time: 30000ms):
	Trace[1664511889]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:04:25.212)
	Trace[1664511889]: [30.000857915s] [30.000857915s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[729091977]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 12:03:55.211) (total time: 30001ms):
	Trace[729091977]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:04:25.212)
	Trace[729091977]: [30.001249312s] [30.001249312s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
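Note: all of the i/o timeouts to 10.96.0.1:443 fall in the ~30s window right after the restart (12:03:55 to 12:04:25), before the in-cluster path to the apiserver Service was functional; no further errors are logged after 12:04:25, consistent with the reflector retry succeeding. A sketch for checking the Service wiring (assumes a working kubectl pointed at this cluster, which the test host itself does not have):

    $ kubectl get svc kubernetes            # ClusterIP should be 10.96.0.1
    $ kubectl get endpoints kubernetes      # should list the control-plane address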
	
	
	==> describe nodes <==
	Name:               embed-certs-132595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-132595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=embed-certs-132595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T12_02_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 12:02:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-132595
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 12:08:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 12:04:24 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 12:04:24 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 12:04:24 +0000   Mon, 16 Sep 2024 12:02:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 12:04:24 +0000   Mon, 16 Sep 2024 12:03:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-132595
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc17b342b8c4678afe3a07284287afc
	  System UUID:                ac9bc1b7-26e7-4faa-ad97-c61b5564343d
	  Boot ID:                    a010aa60-610e-44b7-a4b8-c05f29205fcf
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-lmhpj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m31s
	  kube-system                 etcd-embed-certs-132595                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m36s
	  kube-system                 kindnet-s4vkq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m31s
	  kube-system                 kube-apiserver-embed-certs-132595             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-controller-manager-embed-certs-132595    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-5jjq9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-embed-certs-132595             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 metrics-server-6867b74b74-rhxfx               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m42s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-7c96f5b85b-8b494    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-x2xqb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m29s                  kube-proxy       
	  Normal   Starting                 4m21s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m42s (x8 over 5m42s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m42s (x8 over 5m42s)  kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m42s (x7 over 5m42s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m36s                  kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 5m36s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m36s                  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m36s                  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m36s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m31s                  node-controller  Node embed-certs-132595 event: Registered Node embed-certs-132595 in Controller
	  Normal   NodeReady                4m49s                  kubelet          Node embed-certs-132595 status is now: NodeReady
	  Normal   Starting                 4m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m28s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node embed-certs-132595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node embed-certs-132595 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m20s                  node-controller  Node embed-certs-132595 event: Registered Node embed-certs-132595 in Controller
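Note: the describe output shows a healthy node: Ready=True since 12:03:28, no taints, and modest resource requests (950m CPU, 420Mi memory against 8 CPUs and ~32Gi), so the failures in this run are not node-pressure related. A one-line spot check (again assuming a working kubectl elsewhere):

    $ kubectl get node embed-certs-132595 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # expect: True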
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +1.029111] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000005] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000002] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +2.015834] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000008] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000006] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000002] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[Sep16 12:04] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000005] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000002] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +0.004028] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000010] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +8.187386] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000006] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000001] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
	[  +0.004010] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-2bfc3c9091b0
	[  +0.000005] ll header: 00000000: 02 42 5b 77 dd a5 02 42 c0 a8 67 02 08 00
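Note: the recurring "martian source 10.96.0.1 from 10.244.0.3" messages mean the kernel saw packets on the Docker bridge br-2bfc3c9091b0 carrying the cluster Service IP as source from a pod address; they cluster around the same post-restart window as the CoreDNS timeouts above. Whether such packets are logged at all depends on reverse-path filtering; the relevant knobs can be inspected on the host:

    $ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians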
	
	
	==> etcd [563d413f204fef5b1292188b89522cadf64928b89289777ab7f40a7c3ce7b0be] <==
	{"level":"info","ts":"2024-09-16T12:03:51.105149Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T12:03:51.104945Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2024-09-16T12:03:51.105453Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:03:51.105600Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T12:03:51.107143Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T12:03:51.107427Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T12:03:51.107493Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T12:03:51.107620Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T12:03:51.107660Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-09-16T12:03:52.206640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T12:03:52.206686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T12:03:52.206715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2024-09-16T12:03:52.206729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T12:03:52.206735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2024-09-16T12:03:52.206743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T12:03:52.206751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2024-09-16T12:03:52.208533Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T12:03:52.208550Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T12:03:52.208541Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-132595 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T12:03:52.208711Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T12:03:52.208737Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T12:03:52.209661Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T12:03:52.209917Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T12:03:52.210608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2024-09-16T12:03:52.210992Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:08:17 up  1:50,  0 users,  load average: 0.23, 0.69, 0.84
	Linux embed-certs-132595 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3be21e3ce716de27d9210785411000c377ac7328e001ba99414506c6032a5351] <==
	I0916 12:06:15.625498       1 main.go:299] handling current node
	I0916 12:06:25.621474       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:06:25.621512       1 main.go:299] handling current node
	I0916 12:06:35.621556       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:06:35.621621       1 main.go:299] handling current node
	I0916 12:06:45.625906       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:06:45.625940       1 main.go:299] handling current node
	I0916 12:06:55.618460       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:06:55.618492       1 main.go:299] handling current node
	I0916 12:07:05.621431       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:07:05.621482       1 main.go:299] handling current node
	I0916 12:07:15.619434       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:07:15.619500       1 main.go:299] handling current node
	I0916 12:07:25.625412       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:07:25.625459       1 main.go:299] handling current node
	I0916 12:07:35.622375       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:07:35.622419       1 main.go:299] handling current node
	I0916 12:07:45.618341       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:07:45.618384       1 main.go:299] handling current node
	I0916 12:07:55.618523       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:07:55.618562       1 main.go:299] handling current node
	I0916 12:08:05.621415       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:08:05.621458       1 main.go:299] handling current node
	I0916 12:08:15.626213       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 12:08:15.626252       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d5ec3cb9472327ce3dd89489838e2d8c0f669b47914e0e7094489a7090828227] <==
	I0916 12:03:55.599507       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 12:03:55.637537       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.192.37"}
	I0916 12:03:55.650401       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.239.59"}
	I0916 12:03:58.123955       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 12:03:58.272583       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 12:03:58.323321       1 controller.go:615] quota admission added evaluator for: endpoints
	W0916 12:04:54.607862       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 12:04:54.607934       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 12:04:54.607969       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 12:04:54.609094       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 12:04:54.609121       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 12:06:54.609359       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 12:06:54.609361       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 12:06:54.609431       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 12:06:54.609516       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 12:06:54.610554       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 12:06:54.610617       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
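Note: the 503s for v1beta1.metrics.k8s.io follow directly from the metrics-server pod never starting (its image pull backs off, per the CRI-O and kubelet sections): the APIService object exists but has no healthy backend, so the aggregator's OpenAPI fetch keeps failing and re-queuing. To see the aggregated API's availability condition (working kubectl assumed):

    $ kubectl get apiservice v1beta1.metrics.k8s.io
    # expect AVAILABLE=False with a MissingEndpoints/FailedDiscoveryCheck-style reason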
	
	
	==> kube-controller-manager [2d30a88cdf4efdc04337f999f69eea76952e61d702ef722717097d388f0e1749] <==
	I0916 12:04:44.023742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="66.397µs"
	I0916 12:04:46.222879       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="77.958µs"
	E0916 12:04:57.987398       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:04:58.420956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 12:04:59.024126       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="94.999µs"
	E0916 12:05:27.993014       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:05:28.427924       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 12:05:29.424583       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="95.459µs"
	I0916 12:05:36.221757       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="70.243µs"
	I0916 12:05:37.023336       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="72.345µs"
	I0916 12:05:52.024887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="78.788µs"
	E0916 12:05:57.998978       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:05:58.435038       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 12:06:28.004551       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:06:28.441995       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 12:06:53.584723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="54.67µs"
	I0916 12:06:56.220954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="88.79µs"
	E0916 12:06:58.010034       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:06:58.449492       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 12:07:02.024320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="73.816µs"
	I0916 12:07:15.024638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="96.651µs"
	E0916 12:07:28.015345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:07:28.457537       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 12:07:58.021474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 12:07:58.466242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
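Note: the resource-quota controller and garbage-collector errors every 30s are the same stale-discovery symptom, not independent failures: both controllers refresh API discovery and trip over the unavailable metrics.k8s.io/v1beta1 group. The same error surfaces on any discovery-driven client, for example:

    $ kubectl api-resources >/dev/null
    # stderr: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: ...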
	
	
	==> kube-proxy [bc5b82c0c6904bf9a0a21762f1da7853ccf2cfca0aecc80f586bd0926ab908c3] <==
	I0916 12:03:55.204308       1 server_linux.go:66] "Using iptables proxy"
	I0916 12:03:55.500512       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0916 12:03:55.500591       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 12:03:55.593689       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 12:03:55.593875       1 server_linux.go:169] "Using iptables Proxier"
	I0916 12:03:55.596931       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 12:03:55.597380       1 server.go:483] "Version info" version="v1.31.1"
	I0916 12:03:55.597473       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 12:03:55.598759       1 config.go:105] "Starting endpoint slice config controller"
	I0916 12:03:55.598804       1 config.go:199] "Starting service config controller"
	I0916 12:03:55.598846       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 12:03:55.598895       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 12:03:55.598971       1 config.go:328] "Starting node config controller"
	I0916 12:03:55.598986       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 12:03:55.700041       1 shared_informer.go:320] Caches are synced for node config
	I0916 12:03:55.700098       1 shared_informer.go:320] Caches are synced for service config
	I0916 12:03:55.700114       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [25c574220d35e1ad6a109c7ae354f5d5204defb472cc43150baeb2bcad5665f5] <==
	I0916 12:03:52.222703       1 serving.go:386] Generated self-signed cert in-memory
	W0916 12:03:53.504242       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 12:03:53.504406       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 12:03:53.504452       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 12:03:53.504500       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 12:03:53.694648       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 12:03:53.694697       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 12:03:53.698085       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 12:03:53.698227       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 12:03:53.698233       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 12:03:53.698324       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 12:03:53.802200       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 12:07:15 embed-certs-132595 kubelet[810]: E0916 12:07:15.015112     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rhxfx" podUID="1f7ed956-692d-4b25-9cbf-8f79cf304d25"
	Sep 16 12:07:20 embed-certs-132595 kubelet[810]: E0916 12:07:20.048702     810 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488440048488798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:07:20 embed-certs-132595 kubelet[810]: E0916 12:07:20.048744     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488440048488798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:07:24 embed-certs-132595 kubelet[810]: I0916 12:07:24.013926     810 scope.go:117] "RemoveContainer" containerID="44741d025e5263999f5f8c9432acd71b5a26fcf12bf28cc7832800f946d89504"
	Sep 16 12:07:24 embed-certs-132595 kubelet[810]: E0916 12:07:24.014179     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-8b494_kubernetes-dashboard(9237110e-9b2d-44fa-bb02-78444575a54b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-8b494" podUID="9237110e-9b2d-44fa-bb02-78444575a54b"
	Sep 16 12:07:30 embed-certs-132595 kubelet[810]: E0916 12:07:30.015072     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rhxfx" podUID="1f7ed956-692d-4b25-9cbf-8f79cf304d25"
	Sep 16 12:07:30 embed-certs-132595 kubelet[810]: E0916 12:07:30.050345     810 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488450050162599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:07:30 embed-certs-132595 kubelet[810]: E0916 12:07:30.050383     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488450050162599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:07:39 embed-certs-132595 kubelet[810]: I0916 12:07:39.013699     810 scope.go:117] "RemoveContainer" containerID="44741d025e5263999f5f8c9432acd71b5a26fcf12bf28cc7832800f946d89504"
	Sep 16 12:07:39 embed-certs-132595 kubelet[810]: E0916 12:07:39.013898     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-8b494_kubernetes-dashboard(9237110e-9b2d-44fa-bb02-78444575a54b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-8b494" podUID="9237110e-9b2d-44fa-bb02-78444575a54b"
	Sep 16 12:07:40 embed-certs-132595 kubelet[810]: E0916 12:07:40.051817     810 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488460051463531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:07:40 embed-certs-132595 kubelet[810]: E0916 12:07:40.051859     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488460051463531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:07:41 embed-certs-132595 kubelet[810]: E0916 12:07:41.014622     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rhxfx" podUID="1f7ed956-692d-4b25-9cbf-8f79cf304d25"
	Sep 16 12:07:50 embed-certs-132595 kubelet[810]: E0916 12:07:50.052792     810 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488470052585030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:07:50 embed-certs-132595 kubelet[810]: E0916 12:07:50.052840     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488470052585030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:07:53 embed-certs-132595 kubelet[810]: E0916 12:07:53.014732     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rhxfx" podUID="1f7ed956-692d-4b25-9cbf-8f79cf304d25"
	Sep 16 12:07:54 embed-certs-132595 kubelet[810]: I0916 12:07:54.013584     810 scope.go:117] "RemoveContainer" containerID="44741d025e5263999f5f8c9432acd71b5a26fcf12bf28cc7832800f946d89504"
	Sep 16 12:07:54 embed-certs-132595 kubelet[810]: E0916 12:07:54.013842     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-8b494_kubernetes-dashboard(9237110e-9b2d-44fa-bb02-78444575a54b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-8b494" podUID="9237110e-9b2d-44fa-bb02-78444575a54b"
	Sep 16 12:08:00 embed-certs-132595 kubelet[810]: E0916 12:08:00.054628     810 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488480054445236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:08:00 embed-certs-132595 kubelet[810]: E0916 12:08:00.054663     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488480054445236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:08:05 embed-certs-132595 kubelet[810]: E0916 12:08:05.015468     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rhxfx" podUID="1f7ed956-692d-4b25-9cbf-8f79cf304d25"
	Sep 16 12:08:09 embed-certs-132595 kubelet[810]: I0916 12:08:09.013780     810 scope.go:117] "RemoveContainer" containerID="44741d025e5263999f5f8c9432acd71b5a26fcf12bf28cc7832800f946d89504"
	Sep 16 12:08:09 embed-certs-132595 kubelet[810]: E0916 12:08:09.013979     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-8b494_kubernetes-dashboard(9237110e-9b2d-44fa-bb02-78444575a54b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-8b494" podUID="9237110e-9b2d-44fa-bb02-78444575a54b"
	Sep 16 12:08:10 embed-certs-132595 kubelet[810]: E0916 12:08:10.055902     810 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488490055698254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 12:08:10 embed-certs-132595 kubelet[810]: E0916 12:08:10.055956     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726488490055698254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177613,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [85af30dcd81b23dfa2500bd1117ac41768a8ad3e876bce48f3481fad1a5e7f34] <==
	2024/09/16 12:04:07 Using namespace: kubernetes-dashboard
	2024/09/16 12:04:07 Using in-cluster config to connect to apiserver
	2024/09/16 12:04:07 Using secret token for csrf signing
	2024/09/16 12:04:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 12:04:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 12:04:07 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 12:04:07 Generating JWE encryption key
	2024/09/16 12:04:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 12:04:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 12:04:07 Initializing JWE encryption key from synchronized object
	2024/09/16 12:04:07 Creating in-cluster Sidecar client
	2024/09/16 12:04:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:04:07 Serving insecurely on HTTP port: 9090
	2024/09/16 12:04:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:05:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:05:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:06:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:06:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:07:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:07:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:08:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 12:04:07 Starting overwatch
	
	
	==> storage-provisioner [36129a02061e65d6dff1745e352dc83fd9a2c067faf534a1ba1117708bb93488] <==
	I0916 12:04:25.351399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 12:04:25.358717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 12:04:25.358769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 12:04:42.755902       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 12:04:42.755975       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"145c877d-a7a1-47fc-887a-f3ff6cf439ce", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-132595_b9990c31-303b-4371-9c97-8a4d7b37e64b became leader
	I0916 12:04:42.756051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-132595_b9990c31-303b-4371-9c97-8a4d7b37e64b!
	I0916 12:04:42.856312       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-132595_b9990c31-303b-4371-9c97-8a4d7b37e64b!
	
	
	==> storage-provisioner [f26a86dedad239cdd812dc2bf8d8b766a273b91b9b32f869991b6e412b951104] <==
	I0916 12:03:55.112655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 12:04:25.115025       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
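
The storage-provisioner output above is the standard client-go leader-election handshake: attempt to acquire the kube-system/k8s.io-minikube-hostpath lock, become leader, then start the provisioner controller. For reference, a minimal Go sketch of that pattern, assuming a Lease-based lock and illustrative timings (the Event line shows the provisioner itself still uses an older Endpoints-based lock, so this is an approximation, not its actual code):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs inside the cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		identity, _ := os.Hostname()

		// Lock namespace/name taken from the log; lock type and timings are assumptions.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: identity})
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, shutting down")
				},
			},
		})
	}

RunOrDie blocks until the lock is lost or the context is cancelled, and OnStartedLeading fires only once the lease is held, which is why the controller-start message in the log appears some 17 seconds after the acquire attempt.
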
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132595 -n embed-certs-132595
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (518.797µs)
helpers_test.go:263: kubectl --context embed-certs-132595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.96s)
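
The recurring "fork/exec /usr/local/bin/kubectl: exec format error" above (the same error that fails most tests in this run) is what the kernel returns when a binary's format or architecture does not match the host. One quick check is to read the ELF header; a minimal Go sketch, assuming only the path from the error message:

	package main

	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		path := "/usr/local/bin/kubectl" // path taken from the failing fork/exec call
		f, err := elf.Open(path)
		if err != nil {
			// Truncated files or non-ELF content also produce exec format errors.
			fmt.Printf("%s is not a readable ELF binary: %v\n", path, err)
			os.Exit(1)
		}
		defer f.Close()
		fmt.Printf("binary machine: %v, host: %s/%s\n", f.Machine, runtime.GOOS, runtime.GOARCH)
		if f.Machine != elf.EM_X86_64 {
			fmt.Println("architecture mismatch: fork/exec would fail with exec format error")
		}
	}

On this amd64 agent, anything other than EM_X86_64 (an arm64 kubectl, say, or a zero-length file from an interrupted download) would reproduce the failure.
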


Test pass (226/306)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 26.45
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 20.1
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.08
21 TestBinaryMirror 0.78
22 TestOffline 78.92
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 172.93
35 TestAddons/parallel/InspektorGadget 10.66
40 TestAddons/parallel/Headlamp 16.37
41 TestAddons/parallel/CloudSpanner 5.49
43 TestAddons/parallel/NvidiaDevicePlugin 6.47
44 TestAddons/parallel/Yakd 10.64
45 TestAddons/StoppedEnableDisable 12.08
47 TestCertExpiration 225.9
49 TestForceSystemdFlag 23.63
50 TestForceSystemdEnv 32.05
52 TestKVMDriverInstallOrUpdate 4.62
56 TestErrorSpam/setup 24.52
57 TestErrorSpam/start 0.61
58 TestErrorSpam/status 0.88
59 TestErrorSpam/pause 1.52
60 TestErrorSpam/unpause 1.58
61 TestErrorSpam/stop 1.34
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 38.28
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 23.16
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.62
73 TestFunctional/serial/CacheCmd/cache/add_local 2.39
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 35.26
83 TestFunctional/serial/LogsCmd 1.41
84 TestFunctional/serial/LogsFileCmd 1.38
87 TestFunctional/parallel/ConfigCmd 0.38
89 TestFunctional/parallel/DryRun 0.55
90 TestFunctional/parallel/InternationalLanguage 0.23
91 TestFunctional/parallel/StatusCmd 1.28
96 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/SSHCmd 0.7
100 TestFunctional/parallel/CpCmd 1.93
102 TestFunctional/parallel/FileSync 0.26
103 TestFunctional/parallel/CertSync 1.81
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
111 TestFunctional/parallel/License 0.99
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
117 TestFunctional/parallel/ProfileCmd/profile_list 0.45
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
122 TestFunctional/parallel/MountCmd/specific-port 2.02
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/MountCmd/VerifyCleanup 1.9
130 TestFunctional/parallel/Version/short 0.05
131 TestFunctional/parallel/Version/components 0.49
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
136 TestFunctional/parallel/ImageCommands/ImageBuild 3.67
137 TestFunctional/parallel/ImageCommands/Setup 1.77
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.87
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.74
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 150.37
159 TestMultiControlPlane/serial/DeployApp 6.41
160 TestMultiControlPlane/serial/PingHostFromPods 1.02
161 TestMultiControlPlane/serial/AddWorkerNode 31.98
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.66
164 TestMultiControlPlane/serial/CopyFile 15.68
165 TestMultiControlPlane/serial/StopSecondaryNode 12.47
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.65
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 227.31
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.49
172 TestMultiControlPlane/serial/StopCluster 35.64
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.46
175 TestMultiControlPlane/serial/AddSecondaryNode 66.67
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
180 TestJSONOutput/start/Command 66.64
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.69
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.6
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.73
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
205 TestKicCustomNetwork/create_custom_network 40.31
206 TestKicCustomNetwork/use_default_bridge_network 23.29
207 TestKicExistingNetwork 23.33
208 TestKicCustomSubnet 23.72
209 TestKicStaticIP 26.35
210 TestMainNoArgs 0.04
211 TestMinikubeProfile 49.21
214 TestMountStart/serial/StartWithMountFirst 9.7
215 TestMountStart/serial/VerifyMountFirst 0.24
216 TestMountStart/serial/StartWithMountSecond 6.88
217 TestMountStart/serial/VerifyMountSecond 0.24
218 TestMountStart/serial/DeleteFirst 1.62
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.17
221 TestMountStart/serial/RestartStopped 8.63
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 94.74
226 TestMultiNode/serial/DeployApp2Nodes 5.17
227 TestMultiNode/serial/PingHostFrom2Pods 0.7
228 TestMultiNode/serial/AddNode 25.89
230 TestMultiNode/serial/ProfileList 0.29
231 TestMultiNode/serial/CopyFile 9.05
232 TestMultiNode/serial/StopNode 2.13
234 TestMultiNode/serial/RestartKeepsNodes 98.68
236 TestMultiNode/serial/StopMultiNode 23.7
238 TestMultiNode/serial/ValidateNameConflict 23.45
243 TestPreload 120.18
245 TestScheduledStopUnix 100.2
248 TestInsufficientStorage 9.91
249 TestRunningBinaryUpgrade 75.79
252 TestMissingContainerUpgrade 199.88
254 TestStoppedBinaryUpgrade/Setup 2.82
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 29.28
257 TestStoppedBinaryUpgrade/Upgrade 157.14
258 TestNoKubernetes/serial/StartWithStopK8s 20.03
259 TestNoKubernetes/serial/Start 6.07
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
261 TestNoKubernetes/serial/ProfileList 0.9
262 TestNoKubernetes/serial/Stop 1.18
263 TestNoKubernetes/serial/StartNoArgs 10.65
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
272 TestNetworkPlugins/group/false 2.18
283 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
285 TestPause/serial/Start 40.75
286 TestPause/serial/SecondStartNoReconfiguration 24.64
287 TestNetworkPlugins/group/auto/Start 37.61
288 TestPause/serial/Pause 0.75
289 TestPause/serial/VerifyStatus 0.32
290 TestPause/serial/Unpause 0.65
291 TestPause/serial/PauseAgain 0.73
292 TestPause/serial/DeletePaused 2.69
293 TestPause/serial/VerifyDeletedResources 14.27
294 TestNetworkPlugins/group/auto/KubeletFlags 0.26
296 TestNetworkPlugins/group/kindnet/Start 67.38
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
300 TestNetworkPlugins/group/calico/Start 53.57
301 TestNetworkPlugins/group/enable-default-cni/Start 70.44
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/calico/KubeletFlags 0.25
305 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
307 TestNetworkPlugins/group/flannel/Start 52.42
308 TestNetworkPlugins/group/flannel/ControllerPod 6.01
309 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
311 TestNetworkPlugins/group/bridge/Start 62.14
312 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
314 TestNetworkPlugins/group/custom-flannel/Start 50.3
316 TestStartStop/group/old-k8s-version/serial/FirstStart 139.99
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
321 TestStartStop/group/old-k8s-version/serial/Stop 5.76
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
327 TestStartStop/group/old-k8s-version/serial/Pause 2.63
329 TestStartStop/group/no-preload/serial/FirstStart 55.7
332 TestStartStop/group/no-preload/serial/Stop 5.81
333 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
334 TestStartStop/group/no-preload/serial/SecondStart 261.82
335 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
338 TestStartStop/group/no-preload/serial/Pause 2.66
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.48
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 5.8
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.6
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.68
351 TestStartStop/group/newest-cni/serial/FirstStart 24.51
352 TestStartStop/group/newest-cni/serial/DeployApp 0
353 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.89
354 TestStartStop/group/newest-cni/serial/Stop 1.2
355 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
356 TestStartStop/group/newest-cni/serial/SecondStart 12.43
357 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
360 TestStartStop/group/newest-cni/serial/Pause 2.77
362 TestStartStop/group/embed-certs/serial/FirstStart 68.88
365 TestStartStop/group/embed-certs/serial/Stop 5.77
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
367 TestStartStop/group/embed-certs/serial/SecondStart 261.77
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
370 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
371 TestStartStop/group/embed-certs/serial/Pause 2.65

TestDownloadOnly/v1.20.0/json-events (26.45s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-534059 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-534059 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (26.453004089s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (26.45s)
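
With -o=json, minikube prints each progress step as one JSON event per line on stdout, which is what this test consumes. A minimal sketch of such a consumer, assuming only that every event line is a self-contained JSON object with a "type" field (the full event schema is not shown in this report):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Pipe events in, e.g.:
		//   minikube start -o=json --download-only ... | this-program
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise on the stream
			}
			fmt.Printf("event type=%v\n", ev["type"])
		}
	}
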

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-534059
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-534059: exit status 85 (61.908811ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-534059 | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |          |
	|         | -p download-only-534059        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:22:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:22:22.226240   11220 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:22.226482   11220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:22.226495   11220 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:22.226501   11220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:22.226714   11220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	W0916 10:22:22.226877   11220 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19651-3799/.minikube/config/config.json: open /home/jenkins/minikube-integration/19651-3799/.minikube/config/config.json: no such file or directory
	I0916 10:22:22.227493   11220 out.go:352] Setting JSON to true
	I0916 10:22:22.228370   11220 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":282,"bootTime":1726481860,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:22:22.228464   11220 start.go:139] virtualization: kvm guest
	I0916 10:22:22.231066   11220 out.go:97] [download-only-534059] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:22:22.231212   11220 notify.go:220] Checking for updates...
	W0916 10:22:22.231224   11220 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:22:22.232748   11220 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:22:22.234368   11220 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:22.235905   11220 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:22:22.237496   11220 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:22:22.239086   11220 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 10:22:22.242344   11220 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:22:22.242601   11220 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:22.264206   11220 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:22:22.264310   11220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:22:22.656735   11220 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:22:22.647591169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:22:22.656835   11220 docker.go:318] overlay module found
	I0916 10:22:22.658887   11220 out.go:97] Using the docker driver based on user configuration
	I0916 10:22:22.658916   11220 start.go:297] selected driver: docker
	I0916 10:22:22.658921   11220 start.go:901] validating driver "docker" against <nil>
	I0916 10:22:22.659000   11220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:22:22.708134   11220 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:22:22.699591603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:22:22.708331   11220 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:22:22.709000   11220 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0916 10:22:22.709221   11220 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:22:22.711476   11220 out.go:169] Using Docker driver with root privileges
	I0916 10:22:22.713035   11220 cni.go:84] Creating CNI manager for ""
	I0916 10:22:22.713088   11220 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:22:22.713102   11220 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:22:22.713175   11220 start.go:340] cluster config:
	{Name:download-only-534059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-534059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:22.714856   11220 out.go:97] Starting "download-only-534059" primary control-plane node in "download-only-534059" cluster
	I0916 10:22:22.714882   11220 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:22:22.716396   11220 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:22:22.716424   11220 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 10:22:22.716535   11220 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:22:22.732475   11220 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:22:22.732684   11220 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:22:22.732787   11220 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:22:23.191728   11220 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 10:22:23.191754   11220 cache.go:56] Caching tarball of preloaded images
	I0916 10:22:23.191898   11220 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 10:22:23.194299   11220 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 10:22:23.194333   11220 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 10:22:23.294800   11220 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 10:22:36.718738   11220 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 10:22:36.718833   11220 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 10:22:37.758586   11220 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 10:22:37.758932   11220 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/download-only-534059/config.json ...
	I0916 10:22:37.758966   11220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/download-only-534059/config.json: {Name:mk3fdb1eac4b3085d2e3606679427c45ffd246cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:37.759150   11220 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 10:22:37.759355   11220 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-534059 host does not exist
	  To start a cluster, run: "minikube start -p download-only-534059"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
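
The download lines in the log above fetch the preload tarball with a ?checksum=md5:... query and then verify the file on disk (preload.go:254). The verification amounts to hashing and comparing; a minimal sketch, with the checksum copied from the log and the cache path shortened to a hypothetical $HOME layout:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
		want := "f93b07cde9c3289306cbaeb7a1803c19" // from the ?checksum=md5: query above

		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload tarball checksum OK")
	}
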

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-534059
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (20.1s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-920673 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-920673 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (20.09975274s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (20.10s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-920673
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-920673: exit status 85 (62.050601ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-534059 | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-534059        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-534059        | download-only-534059 | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only        | download-only-920673 | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-920673        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:22:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:22:49.078326   11636 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:49.078437   11636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:49.078446   11636 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:49.078450   11636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:49.078657   11636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:22:49.079218   11636 out.go:352] Setting JSON to true
	I0916 10:22:49.080040   11636 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":309,"bootTime":1726481860,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:22:49.080134   11636 start.go:139] virtualization: kvm guest
	I0916 10:22:49.082390   11636 out.go:97] [download-only-920673] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:22:49.082551   11636 notify.go:220] Checking for updates...
	I0916 10:22:49.083897   11636 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:22:49.085523   11636 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:49.087044   11636 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:22:49.088528   11636 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:22:49.089947   11636 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 10:22:49.092681   11636 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:22:49.092969   11636 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:49.115312   11636 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:22:49.115463   11636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:22:49.168571   11636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:22:49.159759564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:22:49.168682   11636 docker.go:318] overlay module found
	I0916 10:22:49.170701   11636 out.go:97] Using the docker driver based on user configuration
	I0916 10:22:49.170731   11636 start.go:297] selected driver: docker
	I0916 10:22:49.170737   11636 start.go:901] validating driver "docker" against <nil>
	I0916 10:22:49.170815   11636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:22:49.220561   11636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:22:49.211481532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:22:49.220884   11636 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:22:49.221608   11636 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0916 10:22:49.221814   11636 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:22:49.223738   11636 out.go:169] Using Docker driver with root privileges
	I0916 10:22:49.225086   11636 cni.go:84] Creating CNI manager for ""
	I0916 10:22:49.225149   11636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 10:22:49.225167   11636 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:22:49.225242   11636 start.go:340] cluster config:
	{Name:download-only-920673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-920673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:49.226919   11636 out.go:97] Starting "download-only-920673" primary control-plane node in "download-only-920673" cluster
	I0916 10:22:49.226946   11636 cache.go:121] Beginning downloading kic base image for docker with crio
	I0916 10:22:49.228237   11636 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:22:49.228264   11636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:22:49.228336   11636 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:22:49.245773   11636 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:22:49.245903   11636 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:22:49.245924   11636 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:22:49.245933   11636 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:22:49.245947   11636 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:22:50.058333   11636 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:22:50.058372   11636 cache.go:56] Caching tarball of preloaded images
	I0916 10:22:50.058543   11636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:22:50.060674   11636 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 10:22:50.060705   11636 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0916 10:22:50.233982   11636 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:23:07.425102   11636 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0916 10:23:07.425195   11636 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19651-3799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0916 10:23:08.267654   11636 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:23:08.268021   11636 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/download-only-920673/config.json ...
	I0916 10:23:08.268056   11636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/download-only-920673/config.json: {Name:mk6c74c1543a1090a7b20df5e2065c982bc96e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:08.268315   11636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:23:08.268522   11636 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19651-3799/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-920673 host does not exist
	  To start a cluster, run: "minikube start -p download-only-920673"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
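
Both download-only runs begin by shelling out to docker system info --format "{{json .}}" and decoding the result to validate the driver (the info.go:266 lines above). A minimal sketch of that probe; the struct names only a handful of the fields visible in the log, and the real structure is far larger:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Field names match keys in the logged docker info output.
	type dockerInfo struct {
		ServerVersion   string
		Driver          string
		NCPU            int
		MemTotal        int64
		OperatingSystem string
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("docker %s, %s storage driver, %d CPUs, %d bytes RAM, %s\n",
			info.ServerVersion, info.Driver, info.NCPU, info.MemTotal, info.OperatingSystem)
	}
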

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-920673
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.08s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-291625 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-291625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-291625
--- PASS: TestDownloadOnlyKic (1.08s)

TestBinaryMirror (0.78s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-597115 --alsologtostderr --binary-mirror http://127.0.0.1:44611 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-597115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-597115
--- PASS: TestBinaryMirror (0.78s)

TestOffline (78.92s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-882874 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-882874 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m13.959313575s)
helpers_test.go:175: Cleaning up "offline-crio-882874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-882874
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-882874: (4.956287954s)
--- PASS: TestOffline (78.92s)
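
Every `(dbg) Run:` line in this report is the harness shelling out to the freshly built binary and timing it. Roughly, in Go (a sketch of the pattern, not the suite's real helper; the 10-minute deadline is an assumption), using the exact offline start invocation from above:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Generous deadline, in the spirit of the suite's long-running starts.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"start", "-p", "offline-crio-882874", "--alsologtostderr", "-v=1",
		"--memory=2048", "--wait=true", "--driver=docker", "--container-runtime=crio")
	started := time.Now()
	out, err := cmd.CombinedOutput()
	// Mirrors the "(dbg) Done: ...: (1m13.959313575s)" bookkeeping above.
	fmt.Printf("took %s, err=%v\n%s", time.Since(started), err, out)
}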

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-821781
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-821781: exit status 85 (55.930983ms)
-- stdout --
	* Profile "addons-821781" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-821781"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
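
Exit status 85 is how minikube reports the missing profile here; the test passes because it expects exactly that. Telling a clean non-zero exit apart from a failure to launch looks like this in Go (a sketch; the meaning of 85 is taken only from the run above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-821781")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected success")
	case errors.As(err, &exitErr):
		// 85 is what the missing-profile case above returned.
		fmt.Printf("exit code %d, output:\n%s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not even start the binary:", err)
	}
}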

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-821781
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-821781: exit status 85 (53.313231ms)
-- stdout --
	* Profile "addons-821781" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-821781"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (172.93s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-821781 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-821781 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m52.928850865s)
--- PASS: TestAddons/Setup (172.93s)

TestAddons/parallel/InspektorGadget (10.66s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fmlhp" [2432b1c2-ccad-4646-9941-b5be3a66cf1b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004404368s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-821781
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-821781: (5.649532605s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)
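
The "waiting 8m0s for pods matching ..." lines are a poll-until-healthy loop around the pod phase. The suite uses its own helpers; sketched here with plain kubectl instead (label and namespace from the log; the 2s poll interval is an assumption):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls until a pod matching label reports Running, or the
// deadline passes - the same contract as the 8m0s waits above.
func waitForRunning(ns, label string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		out, _ := exec.CommandContext(ctx, "kubectl", "get", "pods",
			"-n", ns, "-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q in %q not Running after %s", label, ns, timeout)
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	if err := waitForRunning("gadget", "k8s-app=gadget", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}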

                                                
                                    
TestAddons/parallel/Headlamp (16.37s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-821781 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-xfkdj" [cad0d003-8455-4239-998d-1327610acea6] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-xfkdj" [cad0d003-8455-4239-998d-1327610acea6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-xfkdj" [cad0d003-8455-4239-998d-1327610acea6] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003385557s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-821781 addons disable headlamp --alsologtostderr -v=1: (5.634139042s)
--- PASS: TestAddons/parallel/Headlamp (16.37s)

TestAddons/parallel/CloudSpanner (5.49s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-hpwnk" [d9415ac6-16e5-4e32-8d52-7f3dc1c3dc38] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003913303s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-821781
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/NvidiaDevicePlugin (6.47s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fs477" [483985a6-fb0e-4ceb-845b-2154000afac7] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003993686s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-821781
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (10.64s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-sp84b" [52bdacc1-06fc-4c4e-a07d-ee1f543da816] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003603241s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-821781 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-821781 addons disable yakd --alsologtostderr -v=1: (5.637219846s)
--- PASS: TestAddons/parallel/Yakd (10.64s)

TestAddons/StoppedEnableDisable (12.08s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-821781
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-821781: (11.840751284s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-821781
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-821781
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-821781
--- PASS: TestAddons/StoppedEnableDisable (12.08s)

TestCertExpiration (225.9s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-997173 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-997173 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.251663519s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-997173 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-997173 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.322782006s)
helpers_test.go:175: Cleaning up "cert-expiration-997173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-997173
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-997173: (3.323371681s)
--- PASS: TestCertExpiration (225.90s)
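
The two starts differ only in --cert-expiration: 3m to force imminent expiry, then 8760h (one year) to replace the certs. These are ordinary Go duration strings, as is the CertExpiration:26280h0m0s default visible in config dumps elsewhere in this report; checking them is one call:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The first two values are from the runs above; the third is the
	// default seen in the profile config dumps (three years).
	for _, s := range []string{"3m", "8760h", "26280h0m0s"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			fmt.Println(s, "->", err)
			continue
		}
		fmt.Printf("%s -> %v (%.1f days)\n", s, d, d.Hours()/24)
	}
}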

                                                
                                    
TestForceSystemdFlag (23.63s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-587021 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-587021 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.953316308s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-587021 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-587021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-587021
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-587021: (2.405685989s)
--- PASS: TestForceSystemdFlag (23.63s)

TestForceSystemdEnv (32.05s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-807944 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-807944 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.824977518s)
helpers_test.go:175: Cleaning up "force-systemd-env-807944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-807944
E0916 11:05:02.444011   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-807944: (5.222016453s)
--- PASS: TestForceSystemdEnv (32.05s)

TestKVMDriverInstallOrUpdate (4.62s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.62s)

TestErrorSpam/setup (24.52s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-530798 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-530798 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-530798 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-530798 --driver=docker  --container-runtime=crio: (24.520687168s)
error_spam_test.go:91: acceptable stderr: "E0916 10:32:51.307726   29660 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error"
--- PASS: TestErrorSpam/setup (24.52s)
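
Note the "acceptable stderr" line: the spam check keeps an allow-list of known-benign stderr output, and the broken kubectl probe from this run is on it. The allow-list shape, sketched (only the kubectl pattern is grounded in the log; the helper name is illustrative):

package main

import (
	"fmt"
	"regexp"
)

var acceptable = []*regexp.Regexp{
	// Matches the kubectl probe failure recorded as acceptable above.
	regexp.MustCompile(`kubectl info: exec: fork/exec .*: exec format error`),
}

// isAcceptable reports whether a stderr line is expected noise
// rather than spam worth failing the test over.
func isAcceptable(line string) bool {
	for _, re := range acceptable {
		if re.MatchString(line) {
			return true
		}
	}
	return false
}

func main() {
	line := `E0916 10:32:51.307726   29660 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error`
	fmt.Println(isAcceptable(line)) // true: not counted as spam
}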

                                                
                                    
TestErrorSpam/start (0.61s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

TestErrorSpam/status (0.88s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.52s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.58s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

TestErrorSpam/stop (1.34s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 stop: (1.169209004s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530798 --log_dir /tmp/nospam-530798 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19651-3799/.minikube/files/etc/test/nested/copy/11208/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.28s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546931 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-546931 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.283502902s)
--- PASS: TestFunctional/serial/StartWithProxy (38.28s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (23.16s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546931 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-546931 --alsologtostderr -v=8: (23.154127261s)
functional_test.go:663: soft start took 23.15486057s for "functional-546931" cluster.
--- PASS: TestFunctional/serial/SoftStart (23.16s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 cache add registry.k8s.io/pause:3.1: (1.490461357s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 cache add registry.k8s.io/pause:3.3: (1.661466111s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 cache add registry.k8s.io/pause:latest: (1.468888751s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.62s)
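
The three cache add runs are independent of one another, so the same warm-up can be scripted as a loop over image references (a sketch mirroring the exact commands above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	images := []string{
		"registry.k8s.io/pause:3.1",
		"registry.k8s.io/pause:3.3",
		"registry.k8s.io/pause:latest",
	}
	for _, img := range images {
		// Same invocation as the test: minikube -p <profile> cache add <image>.
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-546931", "cache", "add", img).CombinedOutput()
		if err != nil {
			fmt.Printf("cache add %s failed: %v\n%s", img, err, out)
		}
	}
}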

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-546931 /tmp/TestFunctionalserialCacheCmdcacheadd_local1321517862/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cache add minikube-local-cache-test:functional-546931
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 cache add minikube-local-cache-test:functional-546931: (2.035955086s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cache delete minikube-local-cache-test:functional-546931
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-546931
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (262.627729ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 cache reload: (1.325803074s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)
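
The reload check is a round trip: remove the image inside the node, confirm crictl inspecti now fails (the FATA line above), run cache reload, then confirm inspecti succeeds. The presence probe reduces to the ssh command's exit status (a sketch; commands as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent reports whether crictl inside the node knows the image;
// a non-zero exit (as in the "no such image" run above) means absent.
func imagePresent(profile, image string) bool {
	err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo crictl inspecti "+image).Run()
	return err == nil
}

func main() {
	fmt.Println(imagePresent("functional-546931", "registry.k8s.io/pause:latest"))
}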

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 kubectl -- --context functional-546931 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-546931 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (35.26s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546931 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-546931 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.26070166s)
functional_test.go:761: restart took 35.260827152s for "functional-546931" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.26s)

TestFunctional/serial/LogsCmd (1.41s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs: (1.407936541s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

TestFunctional/serial/LogsFileCmd (1.38s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 logs --file /tmp/TestFunctionalserialLogsFileCmd210923401/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 logs --file /tmp/TestFunctionalserialLogsFileCmd210923401/001/logs.txt: (1.37668892s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

TestFunctional/parallel/ConfigCmd (0.38s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 config get cpus: exit status 14 (60.606995ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 config get cpus: exit status 14 (69.217577ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
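
config get on an unset key prints nothing and exits 14, so callers must branch on the exit code rather than the output. A sketch of such a wrapper (treating 14 as "not set" is inferred from the two runs above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// getConfig returns (value, ok): ok is false when minikube exits 14,
// i.e. "specified key could not be found in config" as in the runs above.
func getConfig(profile, key string) (string, bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "config", "get", key).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		return "", false, nil
	}
	if err != nil {
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	v, ok, err := getConfig("functional-546931", "cpus")
	fmt.Println(v, ok, err)
}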

                                                
                                    
TestFunctional/parallel/DryRun (0.55s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546931 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-546931 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (255.277404ms)
-- stdout --
	* [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0916 10:35:00.703154   48452 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:00.703332   48452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.703345   48452 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:00.703352   48452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.703699   48452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:35:00.704339   48452 out.go:352] Setting JSON to false
	I0916 10:35:00.705699   48452 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1041,"bootTime":1726481860,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:00.705783   48452 start.go:139] virtualization: kvm guest
	I0916 10:35:00.708621   48452 out.go:177] * [functional-546931] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:00.710137   48452 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:00.710166   48452 notify.go:220] Checking for updates...
	I0916 10:35:00.712250   48452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:00.714750   48452 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:35:00.716169   48452 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:35:00.717946   48452 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:00.719397   48452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:00.721281   48452 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:00.722069   48452 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:00.758576   48452 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:35:00.758748   48452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:00.850172   48452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:00.838949032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:00.850313   48452 docker.go:318] overlay module found
	I0916 10:35:00.851999   48452 out.go:177] * Using the docker driver based on existing profile
	I0916 10:35:00.853984   48452 start.go:297] selected driver: docker
	I0916 10:35:00.854022   48452 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:00.854152   48452 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:00.857056   48452 out.go:201] 
	W0916 10:35:00.858874   48452 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:35:00.860132   48452 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546931 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)
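
--dry-run still runs full validation, which is why the 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before any driver work happens. Asserting that outcome outside the suite's helpers (a sketch, using the invocation from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-546931",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=crio")
	_, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("got the expected RSRC_INSUFFICIENT_REQ_MEMORY rejection")
		return
	}
	fmt.Println("unexpected result:", err)
}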

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546931 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-546931 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (231.91644ms)
-- stdout --
	* [functional-546931] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0916 10:35:00.461288   48255 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:00.461520   48255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.461532   48255 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:00.461538   48255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:00.461884   48255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:35:00.462441   48255 out.go:352] Setting JSON to false
	I0916 10:35:00.463506   48255 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1040,"bootTime":1726481860,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:00.463602   48255 start.go:139] virtualization: kvm guest
	I0916 10:35:00.466366   48255 out.go:177] * [functional-546931] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:00.468039   48255 notify.go:220] Checking for updates...
	I0916 10:35:00.468513   48255 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:00.470887   48255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:00.472442   48255 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 10:35:00.474329   48255 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 10:35:00.476075   48255 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:00.477581   48255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:00.480258   48255 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:00.480946   48255 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:00.508921   48255 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:35:00.509036   48255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:35:00.595508   48255 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:35:00.583777993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:35:00.595644   48255 docker.go:318] overlay module found
	I0916 10:35:00.597504   48255 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0916 10:35:00.598940   48255 start.go:297] selected driver: docker
	I0916 10:35:00.598954   48255 start.go:901] validating driver "docker" against &{Name:functional-546931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-546931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:00.599040   48255 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:00.601611   48255 out.go:201] 
	W0916 10:35:00.603132   48255 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 10:35:00.604401   48255 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
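
Same command as DryRun, but the output arrives in French ("Utilisation du pilote docker basé sur le profil existant" is the earlier "Using the docker driver based on existing profile"), driven by the process locale. Forcing that by hand might look like the sketch below; which variable minikube actually consults is an assumption here, LC_ALL being the conventional choice:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-546931",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	// Assumption: LC_ALL selects the message language, as for most CLIs.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out)
}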

                                                
                                    
TestFunctional/parallel/StatusCmd (1.28s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)
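
The second status invocation passes a Go text/template via -f; the fields it references (including the harmless "kublet" typo in the test's own format string) are rendered from the status struct. The rendering side, reduced to its essentials (the struct here mirrors only the referenced fields, and the values shown are typical rather than taken from this run):

package main

import (
	"os"
	"text/template"
)

// Status mirrors just the fields the format string above references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Format string copied verbatim from the test, typo and all.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	_ = tmpl.Execute(os.Stdout, st) // host:Running,kublet:Running,...
}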

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.7s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.93s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh -n functional-546931 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cp functional-546931:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2999033726/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh -n functional-546931 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh -n functional-546931 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.93s)
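
The three cp invocations above cover host-to-guest, guest-to-host, and copy into a not-yet-existing guest directory; the <node>:<path> form addresses a file inside the guest, while a bare path names one on the host. A minimal sketch of the same round trip (file names are illustrative):

	out/minikube-linux-amd64 -p functional-546931 cp ./local.txt /home/docker/remote.txt
	out/minikube-linux-amd64 -p functional-546931 cp functional-546931:/home/docker/remote.txt ./roundtrip.txt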

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11208/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo cat /etc/test/nested/copy/11208/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.81s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11208.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo cat /etc/ssl/certs/11208.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11208.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo cat /usr/share/ca-certificates/11208.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/112082.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo cat /etc/ssl/certs/112082.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/112082.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo cat /usr/share/ca-certificates/112082.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)
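
The hash-named files checked last (51391683.0, 3ec20f2e.0) follow OpenSSL's CA directory convention: the file name is the certificate's subject hash plus a sequence number, which is how TLS clients locate a CA under /etc/ssl/certs. A sketch of computing such a name yourself (input path illustrative):

	# Prints the 8-hex-digit subject hash; OpenSSL looks the CA up as <hash>.0
	openssl x509 -noout -subject_hash -in /etc/ssl/certs/11208.pem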

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh "sudo systemctl is-active docker": exit status 1 (264.720716ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh "sudo systemctl is-active containerd": exit status 1 (243.098161ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
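
The relayed exit status 3 is systemd's conventional "unit not running" code: systemctl is-active prints the unit state and exits 0 only for an active unit, so "inactive" plus a non-zero exit is exactly what this test wants from docker and containerd on a node whose runtime is crio. The inverse check, mirroring the commands above (not part of this test run):

	out/minikube-linux-amd64 -p functional-546931 ssh "sudo systemctl is-active crio"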

                                                
                                    
TestFunctional/parallel/License (0.99s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.99s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "396.908043ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.187754ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "355.511578ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.482134ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.02s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdspecific-port2367125525/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (382.777625ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdspecific-port2367125525/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh "sudo umount -f /mount-9p": exit status 1 (315.028391ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-546931 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdspecific-port2367125525/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)
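
The first findmnt probe fails only because the 9p mount was still being established; the immediate retry succeeds. --port pins the host-side 9p server to a fixed port rather than a random one, which is what this subtest verifies. A minimal sketch (host path illustrative):

	# Export a host directory into the guest over 9p on a fixed port
	out/minikube-linux-amd64 mount -p functional-546931 /tmp/shared:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T /mount-9p"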

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-546931 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-546931 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-546931 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 49913: os: process already finished
helpers_test.go:508: unable to kill pid 49688: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-546931 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-546931 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.9s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T" /mount1: exit status 1 (364.876418ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-546931 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-546931 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1138778930/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.90s)
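
Rather than stopping the three mount daemons individually, the test uses the single kill switch shown above: mount --kill=true terminates every mount process belonging to the profile, which is why the subsequent per-process stops find nothing left ("process does not exist"). The cleanup command on its own:

	out/minikube-linux-amd64 mount -p functional-546931 --kill=true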

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-546931 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-546931
localhost/kicbase/echo-server:functional-546931
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-546931 image ls --format short --alsologtostderr:
I0916 10:35:15.614534   56038 out.go:345] Setting OutFile to fd 1 ...
I0916 10:35:15.614825   56038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:15.614830   56038 out.go:358] Setting ErrFile to fd 2...
I0916 10:35:15.614834   56038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:15.615073   56038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
I0916 10:35:15.615761   56038 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:15.615871   56038 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:15.616443   56038 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
I0916 10:35:15.635665   56038 ssh_runner.go:195] Run: systemctl --version
I0916 10:35:15.635714   56038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
I0916 10:35:15.655153   56038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
I0916 10:35:15.745967   56038 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
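
As the stderr trace shows, image ls is a thin wrapper: minikube opens an SSH session into the node container and queries the CRI-O image store via crictl, then formats the result. The same raw data can be inspected directly, mirroring the last command in the trace:

	out/minikube-linux-amd64 -p functional-546931 ssh "sudo crictl images --output json"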

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-546931 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/kicbase/echo-server           | functional-546931  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-546931  | 95a410f2633ba | 3.33kB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-546931 image ls --format table --alsologtostderr:
I0916 10:35:16.125498   56309 out.go:345] Setting OutFile to fd 1 ...
I0916 10:35:16.125620   56309 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:16.125629   56309 out.go:358] Setting ErrFile to fd 2...
I0916 10:35:16.125633   56309 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:16.125824   56309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
I0916 10:35:16.126408   56309 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:16.126500   56309 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:16.126851   56309 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
I0916 10:35:16.144553   56309 ssh_runner.go:195] Run: systemctl --version
I0916 10:35:16.144600   56309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
I0916 10:35:16.162662   56309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
I0916 10:35:16.253414   56309 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-546931 image ls --format json --alsologtostderr:
[{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-546931"],"size":"4943877"},
{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},
{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},
{"id":"95a410f2633bab15a957be73ddb9679755197bcb4a2f9aa3f7ae12f47c972dfb","repoDigests":["localhost/minikube-local-cache-test@sha256:d1be08df5ce3ad002b8423d52bdcfad985d4efea79d5d5f34ceb2561836d6d92"],"repoTags":["localhost/minikube-local-cache-test:functional-546931"],"size":"3330"},
{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},
{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},
{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-546931 image ls --format json --alsologtostderr:
I0916 10:35:15.904392   56186 out.go:345] Setting OutFile to fd 1 ...
I0916 10:35:15.904506   56186 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:15.904516   56186 out.go:358] Setting ErrFile to fd 2...
I0916 10:35:15.904520   56186 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:15.904753   56186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
I0916 10:35:15.905380   56186 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:15.905513   56186 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:15.905907   56186 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
I0916 10:35:15.924518   56186 ssh_runner.go:195] Run: systemctl --version
I0916 10:35:15.924571   56186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
I0916 10:35:15.942413   56186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
I0916 10:35:16.033551   56186 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-546931 image ls --format yaml --alsologtostderr:
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-546931
size: "4943877"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 95a410f2633bab15a957be73ddb9679755197bcb4a2f9aa3f7ae12f47c972dfb
repoDigests:
- localhost/minikube-local-cache-test@sha256:d1be08df5ce3ad002b8423d52bdcfad985d4efea79d5d5f34ceb2561836d6d92
repoTags:
- localhost/minikube-local-cache-test:functional-546931
size: "3330"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-546931 image ls --format yaml --alsologtostderr:
I0916 10:35:15.671541   56074 out.go:345] Setting OutFile to fd 1 ...
I0916 10:35:15.671783   56074 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:15.671791   56074 out.go:358] Setting ErrFile to fd 2...
I0916 10:35:15.671795   56074 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:15.671988   56074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
I0916 10:35:15.672665   56074 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:15.672768   56074 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:15.673135   56074 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
I0916 10:35:15.690655   56074 ssh_runner.go:195] Run: systemctl --version
I0916 10:35:15.690714   56074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
I0916 10:35:15.709134   56074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
I0916 10:35:15.806038   56074 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546931 ssh pgrep buildkitd: exit status 1 (247.647383ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image build -t localhost/my-image:functional-546931 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 image build -t localhost/my-image:functional-546931 testdata/build --alsologtostderr: (3.20921408s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-546931 image build -t localhost/my-image:functional-546931 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f0d3429a34e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-546931
--> 036ba01a692
Successfully tagged localhost/my-image:functional-546931
036ba01a6924146b2ac0172b68297ce831dd51577c39605886653e94602425fd
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-546931 image build -t localhost/my-image:functional-546931 testdata/build --alsologtostderr:
I0916 10:35:16.076880   56287 out.go:345] Setting OutFile to fd 1 ...
I0916 10:35:16.077063   56287 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:16.077076   56287 out.go:358] Setting ErrFile to fd 2...
I0916 10:35:16.077083   56287 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:35:16.077418   56287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
I0916 10:35:16.078092   56287 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:16.079307   56287 config.go:182] Loaded profile config "functional-546931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:35:16.079772   56287 cli_runner.go:164] Run: docker container inspect functional-546931 --format={{.State.Status}}
I0916 10:35:16.099760   56287 ssh_runner.go:195] Run: systemctl --version
I0916 10:35:16.099850   56287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546931
I0916 10:35:16.119075   56287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/functional-546931/id_rsa Username:docker}
I0916 10:35:16.213324   56287 build_images.go:161] Building image from path: /tmp/build.3688083202.tar
I0916 10:35:16.213442   56287 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 10:35:16.222040   56287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3688083202.tar
I0916 10:35:16.225314   56287 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3688083202.tar: stat -c "%s %y" /var/lib/minikube/build/build.3688083202.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3688083202.tar': No such file or directory
I0916 10:35:16.225370   56287 ssh_runner.go:362] scp /tmp/build.3688083202.tar --> /var/lib/minikube/build/build.3688083202.tar (3072 bytes)
I0916 10:35:16.247624   56287 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3688083202
I0916 10:35:16.256348   56287 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3688083202 -xf /var/lib/minikube/build/build.3688083202.tar
I0916 10:35:16.265069   56287 crio.go:315] Building image: /var/lib/minikube/build/build.3688083202
I0916 10:35:16.265138   56287 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-546931 /var/lib/minikube/build/build.3688083202 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0916 10:35:19.220190   56287 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-546931 /var/lib/minikube/build/build.3688083202 --cgroup-manager=cgroupfs: (2.955027095s)
I0916 10:35:19.220252   56287 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3688083202
I0916 10:35:19.228629   56287 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3688083202.tar
I0916 10:35:19.236334   56287 build_images.go:217] Built localhost/my-image:functional-546931 from /tmp/build.3688083202.tar
I0916 10:35:19.236366   56287 build_images.go:133] succeeded building to: functional-546931
I0916 10:35:19.236371   56287 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)
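
With the crio runtime there is no Docker daemon inside the node, so the trace above shows the actual build path: minikube tars the build context, copies it to /var/lib/minikube/build, and delegates to podman build inside the node. From the host it is a single command (tag and context directory exactly as in the test):

	out/minikube-linux-amd64 -p functional-546931 image build -t localhost/my-image:functional-546931 testdata/build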

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.77s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.74770787s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-546931
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image load --daemon kicbase/echo-server:functional-546931 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 image load --daemon kicbase/echo-server:functional-546931 --alsologtostderr: (1.112777758s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image load --daemon kicbase/echo-server:functional-546931 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-546931
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image load --daemon kicbase/echo-server:functional-546931 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image save kicbase/echo-server:functional-546931 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-546931 image save kicbase/echo-server:functional-546931 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.743173053s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.74s)
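
image save exports an image from the node's container store to a tar file on the host; it is the counterpart of the image load exercised a few tests later, and together they move images between machines without a registry. A minimal round trip (tar path illustrative):

	out/minikube-linux-amd64 -p functional-546931 image save kicbase/echo-server:functional-546931 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-546931 image load /tmp/echo-server.tar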

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image rm kicbase/echo-server:functional-546931 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-546931
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-546931 image save --daemon kicbase/echo-server:functional-546931 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-546931
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
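
Note the name change: the image is saved back to the Docker daemon as kicbase/echo-server:functional-546931 but inspected as localhost/kicbase/echo-server:functional-546931, the canonical form the CRI-O/podman store uses for unqualified local image names, and the test accepts that form. Verifying the round trip (format string illustrative):

	docker image inspect --format '{{.Id}}' localhost/kicbase/echo-server:functional-546931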

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-546931 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-546931
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-546931
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-546931
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (150.37s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-107957 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0916 10:37:28.631554   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:38:50.553713   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-107957 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m29.683467694s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (150.37s)

TestMultiControlPlane/serial/DeployApp (6.41s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-107957 -- rollout status deployment/busybox: (4.420377503s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-4rfjs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-m2jh6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-plmdj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-4rfjs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-m2jh6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-plmdj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-4rfjs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-m2jh6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-plmdj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.41s)
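The three nslookup rounds above run the same DNS check against each busybox replica in turn. A hand-rolled equivalent, sketched under the assumption that the ha-pod-dns-test.yaml deployment labels its pods app=busybox (the manifest is not shown here) and using plain kubectl with the profile's context instead of the minikube kubectl wrapper:

    # Verify public, short-name, and FQDN resolution from every replica
    for pod in $(kubectl --context ha-107957 get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-107957 exec "$pod" -- nslookup kubernetes.io
      kubectl --context ha-107957 exec "$pod" -- nslookup kubernetes.default
      kubectl --context ha-107957 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done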
TestMultiControlPlane/serial/PingHostFromPods (1.02s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-4rfjs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-4rfjs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-m2jh6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-m2jh6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-plmdj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-107957 -- exec busybox-7dff88458-plmdj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)
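The awk 'NR==5' / cut pipeline is how the test scrapes the host IP out of BusyBox's nslookup output (line 5 carries the Address record in this BusyBox build). A standalone sketch of the same probe, reusing one of the pod names above:

    # Scrape the host gateway IP from nslookup, then send one ICMP echo
    HOST_IP=$(kubectl --context ha-107957 exec busybox-7dff88458-4rfjs -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-107957 exec busybox-7dff88458-4rfjs -- ping -c 1 "$HOST_IP"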
TestMultiControlPlane/serial/AddWorkerNode (31.98s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-107957 -v=7 --alsologtostderr
E0916 10:40:02.444673   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:02.451165   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:02.462607   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:02.484034   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:02.525519   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:02.607011   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:02.769141   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:03.090568   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:03.732472   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:05.013835   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:07.575913   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-107957 -v=7 --alsologtostderr: (31.132031257s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.98s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.66s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.66s)

TestMultiControlPlane/serial/CopyFile (15.68s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status --output json -v=7 --alsologtostderr
E0916 10:40:12.697465   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp testdata/cp-test.txt ha-107957:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile432092999/001/cp-test_ha-107957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957:/home/docker/cp-test.txt ha-107957-m02:/home/docker/cp-test_ha-107957_ha-107957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m02 "sudo cat /home/docker/cp-test_ha-107957_ha-107957-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957:/home/docker/cp-test.txt ha-107957-m03:/home/docker/cp-test_ha-107957_ha-107957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m03 "sudo cat /home/docker/cp-test_ha-107957_ha-107957-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957:/home/docker/cp-test.txt ha-107957-m04:/home/docker/cp-test_ha-107957_ha-107957-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m04 "sudo cat /home/docker/cp-test_ha-107957_ha-107957-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp testdata/cp-test.txt ha-107957-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile432092999/001/cp-test_ha-107957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m02:/home/docker/cp-test.txt ha-107957:/home/docker/cp-test_ha-107957-m02_ha-107957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957 "sudo cat /home/docker/cp-test_ha-107957-m02_ha-107957.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m02:/home/docker/cp-test.txt ha-107957-m03:/home/docker/cp-test_ha-107957-m02_ha-107957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m03 "sudo cat /home/docker/cp-test_ha-107957-m02_ha-107957-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m02:/home/docker/cp-test.txt ha-107957-m04:/home/docker/cp-test_ha-107957-m02_ha-107957-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m04 "sudo cat /home/docker/cp-test_ha-107957-m02_ha-107957-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp testdata/cp-test.txt ha-107957-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile432092999/001/cp-test_ha-107957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt ha-107957:/home/docker/cp-test_ha-107957-m03_ha-107957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957 "sudo cat /home/docker/cp-test_ha-107957-m03_ha-107957.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt ha-107957-m02:/home/docker/cp-test_ha-107957-m03_ha-107957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m02 "sudo cat /home/docker/cp-test_ha-107957-m03_ha-107957-m02.txt"
E0916 10:40:22.939502   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m03:/home/docker/cp-test.txt ha-107957-m04:/home/docker/cp-test_ha-107957-m03_ha-107957-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m04 "sudo cat /home/docker/cp-test_ha-107957-m03_ha-107957-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp testdata/cp-test.txt ha-107957-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile432092999/001/cp-test_ha-107957-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt ha-107957:/home/docker/cp-test_ha-107957-m04_ha-107957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957 "sudo cat /home/docker/cp-test_ha-107957-m04_ha-107957.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt ha-107957-m02:/home/docker/cp-test_ha-107957-m04_ha-107957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m02 "sudo cat /home/docker/cp-test_ha-107957-m04_ha-107957-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m04:/home/docker/cp-test.txt ha-107957-m03:/home/docker/cp-test_ha-107957-m04_ha-107957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 ssh -n ha-107957-m03 "sudo cat /home/docker/cp-test_ha-107957-m04_ha-107957-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.68s)
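Each cp in the matrix above is validated by ssh-cat'ing the destination file. A condensed sketch of one host-node round trip, assuming /tmp/cp-roundtrip.txt as a scratch path; diff stands in for the test's content comparison:

    # Host -> node, node -> host, then compare the two host-side copies
    out/minikube-linux-amd64 -p ha-107957 cp testdata/cp-test.txt ha-107957-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-107957 cp ha-107957-m02:/home/docker/cp-test.txt /tmp/cp-roundtrip.txt
    diff testdata/cp-test.txt /tmp/cp-roundtrip.txt && echo "copy verified"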
TestMultiControlPlane/serial/StopSecondaryNode (12.47s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-107957 node stop m02 -v=7 --alsologtostderr: (11.788649397s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr: exit status 7 (680.827405ms)

-- stdout --
	ha-107957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-107957-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-107957-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0916 10:40:39.600696   78900 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:39.600827   78900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:39.600838   78900 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:39.600844   78900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:39.601048   78900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:40:39.601244   78900 out.go:352] Setting JSON to false
	I0916 10:40:39.601278   78900 mustload.go:65] Loading cluster: ha-107957
	I0916 10:40:39.601394   78900 notify.go:220] Checking for updates...
	I0916 10:40:39.601774   78900 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:39.601796   78900 status.go:255] checking status of ha-107957 ...
	I0916 10:40:39.602320   78900 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:40:39.620577   78900 status.go:330] ha-107957 host status = "Running" (err=<nil>)
	I0916 10:40:39.620614   78900 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:40:39.620926   78900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957
	I0916 10:40:39.640855   78900 host.go:66] Checking if "ha-107957" exists ...
	I0916 10:40:39.641201   78900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:40:39.641253   78900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957
	I0916 10:40:39.665184   78900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957/id_rsa Username:docker}
	I0916 10:40:39.762805   78900 ssh_runner.go:195] Run: systemctl --version
	I0916 10:40:39.766727   78900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:39.779147   78900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:40:39.837026   78900 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 10:40:39.826353111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:40:39.837588   78900 kubeconfig.go:125] found "ha-107957" server: "https://192.168.49.254:8443"
	I0916 10:40:39.837622   78900 api_server.go:166] Checking apiserver status ...
	I0916 10:40:39.837657   78900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:40:39.848929   78900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1512/cgroup
	I0916 10:40:39.858614   78900 api_server.go:182] apiserver freezer: "8:freezer:/docker/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/crio/crio-b1d6cc64c9b2c6f964d9cfedd269b3427f97e09a546dab8177407bdf75af651a"
	I0916 10:40:39.858691   78900 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8934c54a2cf07d0baf6d31e58de30a0f2295d61ee3f8b8d6adbde71e0738b0dd/crio/crio-b1d6cc64c9b2c6f964d9cfedd269b3427f97e09a546dab8177407bdf75af651a/freezer.state
	I0916 10:40:39.866746   78900 api_server.go:204] freezer state: "THAWED"
	I0916 10:40:39.866785   78900 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 10:40:39.870337   78900 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 10:40:39.870363   78900 status.go:422] ha-107957 apiserver status = Running (err=<nil>)
	I0916 10:40:39.870373   78900 status.go:257] ha-107957 status: &{Name:ha-107957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:40:39.870398   78900 status.go:255] checking status of ha-107957-m02 ...
	I0916 10:40:39.870632   78900 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:40:39.887262   78900 status.go:330] ha-107957-m02 host status = "Stopped" (err=<nil>)
	I0916 10:40:39.887283   78900 status.go:343] host is not running, skipping remaining checks
	I0916 10:40:39.887289   78900 status.go:257] ha-107957-m02 status: &{Name:ha-107957-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:40:39.887307   78900 status.go:255] checking status of ha-107957-m03 ...
	I0916 10:40:39.887539   78900 cli_runner.go:164] Run: docker container inspect ha-107957-m03 --format={{.State.Status}}
	I0916 10:40:39.907505   78900 status.go:330] ha-107957-m03 host status = "Running" (err=<nil>)
	I0916 10:40:39.907536   78900 host.go:66] Checking if "ha-107957-m03" exists ...
	I0916 10:40:39.907918   78900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m03
	I0916 10:40:39.925896   78900 host.go:66] Checking if "ha-107957-m03" exists ...
	I0916 10:40:39.926199   78900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:40:39.926241   78900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m03
	I0916 10:40:39.945319   78900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m03/id_rsa Username:docker}
	I0916 10:40:40.038474   78900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:40.049418   78900 kubeconfig.go:125] found "ha-107957" server: "https://192.168.49.254:8443"
	I0916 10:40:40.049448   78900 api_server.go:166] Checking apiserver status ...
	I0916 10:40:40.049489   78900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:40:40.059565   78900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1404/cgroup
	I0916 10:40:40.068150   78900 api_server.go:182] apiserver freezer: "8:freezer:/docker/8104972b8d805a2f2d77060a0bc5853f8e7de05a054ade61205686899c5f4dc5/crio/crio-c22f16e216e005d90837c8d96eaf6e9d8364d0def3ccf855d8b9bb8ae8a53abd"
	I0916 10:40:40.068202   78900 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8104972b8d805a2f2d77060a0bc5853f8e7de05a054ade61205686899c5f4dc5/crio/crio-c22f16e216e005d90837c8d96eaf6e9d8364d0def3ccf855d8b9bb8ae8a53abd/freezer.state
	I0916 10:40:40.075889   78900 api_server.go:204] freezer state: "THAWED"
	I0916 10:40:40.075916   78900 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 10:40:40.079566   78900 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 10:40:40.079611   78900 status.go:422] ha-107957-m03 apiserver status = Running (err=<nil>)
	I0916 10:40:40.079623   78900 status.go:257] ha-107957-m03 status: &{Name:ha-107957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:40:40.079648   78900 status.go:255] checking status of ha-107957-m04 ...
	I0916 10:40:40.079885   78900 cli_runner.go:164] Run: docker container inspect ha-107957-m04 --format={{.State.Status}}
	I0916 10:40:40.097747   78900 status.go:330] ha-107957-m04 host status = "Running" (err=<nil>)
	I0916 10:40:40.097774   78900 host.go:66] Checking if "ha-107957-m04" exists ...
	I0916 10:40:40.098026   78900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107957-m04
	I0916 10:40:40.115103   78900 host.go:66] Checking if "ha-107957-m04" exists ...
	I0916 10:40:40.115362   78900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:40:40.115410   78900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107957-m04
	I0916 10:40:40.132695   78900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/ha-107957-m04/id_rsa Username:docker}
	I0916 10:40:40.226648   78900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:40.237476   78900 status.go:257] ha-107957-m04 status: &{Name:ha-107957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.47s)
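The stderr trace above shows how status decides an apiserver is healthy: find the newest kube-apiserver process, read its freezer cgroup to confirm it is THAWED, then hit /healthz on the HA virtual IP. A rough sketch of the same probe over minikube ssh; the -k flag skips CA verification, a shortcut the test itself does not take:

    # Locate the apiserver process and its freezer cgroup inside the node
    PID=$(out/minikube-linux-amd64 -p ha-107957 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    out/minikube-linux-amd64 -p ha-107957 ssh -- sudo grep -E '^[0-9]+:freezer:' /proc/$PID/cgroup
    # 192.168.49.254:8443 is the kubeconfig server entry seen in the log
    curl -k https://192.168.49.254:8443/healthz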
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.65s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.65s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.31s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-107957 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-107957 -v=7 --alsologtostderr
E0916 10:41:06.690066   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:24.383170   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:34.395827   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-107957 -v=7 --alsologtostderr: (36.548751723s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-107957 --wait=true -v=7 --alsologtostderr
E0916 10:42:46.305285   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-107957 --wait=true -v=7 --alsologtostderr: (3m10.6627903s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-107957
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.31s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

TestMultiControlPlane/serial/StopCluster (35.64s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 stop -v=7 --alsologtostderr
E0916 10:45:30.147599   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-107957 stop -v=7 --alsologtostderr: (35.537320128s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr: exit status 7 (106.526204ms)

-- stdout --
	ha-107957
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107957-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107957-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 10:45:42.402368   98094 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:42.402475   98094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:42.402481   98094 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:42.402487   98094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:42.402705   98094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:45:42.402897   98094 out.go:352] Setting JSON to false
	I0916 10:45:42.402931   98094 mustload.go:65] Loading cluster: ha-107957
	I0916 10:45:42.402971   98094 notify.go:220] Checking for updates...
	I0916 10:45:42.403401   98094 config.go:182] Loaded profile config "ha-107957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:42.403418   98094 status.go:255] checking status of ha-107957 ...
	I0916 10:45:42.403879   98094 cli_runner.go:164] Run: docker container inspect ha-107957 --format={{.State.Status}}
	I0916 10:45:42.421988   98094 status.go:330] ha-107957 host status = "Stopped" (err=<nil>)
	I0916 10:45:42.422042   98094 status.go:343] host is not running, skipping remaining checks
	I0916 10:45:42.422050   98094 status.go:257] ha-107957 status: &{Name:ha-107957 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:42.422088   98094 status.go:255] checking status of ha-107957-m02 ...
	I0916 10:45:42.422451   98094 cli_runner.go:164] Run: docker container inspect ha-107957-m02 --format={{.State.Status}}
	I0916 10:45:42.442544   98094 status.go:330] ha-107957-m02 host status = "Stopped" (err=<nil>)
	I0916 10:45:42.442574   98094 status.go:343] host is not running, skipping remaining checks
	I0916 10:45:42.442582   98094 status.go:257] ha-107957-m02 status: &{Name:ha-107957-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:42.442609   98094 status.go:255] checking status of ha-107957-m04 ...
	I0916 10:45:42.442942   98094 cli_runner.go:164] Run: docker container inspect ha-107957-m04 --format={{.State.Status}}
	I0916 10:45:42.463872   98094 status.go:330] ha-107957-m04 host status = "Stopped" (err=<nil>)
	I0916 10:45:42.463948   98094 status.go:343] host is not running, skipping remaining checks
	I0916 10:45:42.463961   98094 status.go:257] ha-107957-m04 status: &{Name:ha-107957-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.64s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)

TestMultiControlPlane/serial/AddSecondaryNode (66.67s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-107957 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-107957 --control-plane -v=7 --alsologtostderr: (1m5.817165534s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-107957 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

TestJSONOutput/start/Command (66.64s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-170201 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-170201 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.63940378s)
--- PASS: TestJSONOutput/start/Command (66.64s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-170201 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-170201 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-170201 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-170201 --output=json --user=testUser: (5.731891217s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-743423 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-743423 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.131987ms)

-- stdout --
	{"specversion":"1.0","id":"b44cefa2-8ef2-495c-b483-cd04bda0b1d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-743423] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"710ed8a2-50fd-420c-82b8-81b5ac34bc5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"b2e3ee69-3056-40f3-8fd2-1978b9b04310","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad3fb986-26c2-4d0c-9f2b-c3be97cfb0f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig"}}
	{"specversion":"1.0","id":"73094c21-c914-40c7-858b-c49676bd707d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube"}}
	{"specversion":"1.0","id":"21aa8797-2dab-481c-80bc-41bb54071b67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"997ba486-2504-4cd1-9ac0-f1d11d74ab54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b2b1068f-a3b5-46ac-a3f4-aaf8c804c239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-743423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-743423
--- PASS: TestErrorJSONOutput (0.20s)
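Every line of --output=json is a CloudEvents envelope (specversion 1.0), so the stream is easy to post-process. A sketch that pulls the error event back out, assuming jq is available on the host; exit status 56 only propagates through the pipe if pipefail is set:

    # Filter the event stream down to the error event's code and message
    set -o pipefail
    out/minikube-linux-amd64 start -p json-output-error-743423 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + ": " + .data.message'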
TestKicCustomNetwork/create_custom_network (40.31s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-825095 --network=
E0916 10:50:02.444006   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-825095 --network=: (38.272581468s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-825095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-825095
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-825095: (2.015740448s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.31s)

TestKicCustomNetwork/use_default_bridge_network (23.29s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-766552 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-766552 --network=bridge: (21.43580885s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-766552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-766552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-766552: (1.830967969s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.29s)

TestKicExistingNetwork (23.33s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-712826 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-712826 --network=existing-network: (21.35068396s)
helpers_test.go:175: Cleaning up "existing-network-712826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-712826
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-712826: (1.828272284s)
--- PASS: TestKicExistingNetwork (23.33s)

TestKicCustomSubnet (23.72s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-000968 --subnet=192.168.60.0/24
E0916 10:51:06.690268   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-000968 --subnet=192.168.60.0/24: (21.718825717s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-000968 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-000968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-000968
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-000968: (1.985099478s)
--- PASS: TestKicCustomSubnet (23.72s)
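The inspect template above is also the quickest way to assert that Docker honored the requested subnet. A sketch of that check while the profile is still up (the test deletes it immediately afterwards):

    # Compare the subnet Docker allocated against the --subnet flag
    want="192.168.60.0/24"
    got=$(docker network inspect custom-subnet-000968 --format '{{(index .IPAM.Config 0).Subnet}}')
    [ "$got" = "$want" ] && echo "subnet ok" || echo "expected $want, got $got"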
TestKicStaticIP (26.35s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-923583 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-923583 --static-ip=192.168.200.200: (24.286419669s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-923583 ip
helpers_test.go:175: Cleaning up "static-ip-923583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-923583
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-923583: (1.936638445s)
--- PASS: TestKicStaticIP (26.35s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (49.21s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-978914 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-978914 --driver=docker  --container-runtime=crio: (20.232352704s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-991750 --driver=docker  --container-runtime=crio
E0916 10:52:29.757791   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-991750 --driver=docker  --container-runtime=crio: (23.860930809s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-978914
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-991750
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-991750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-991750
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-991750: (1.82060044s)
helpers_test.go:175: Cleaning up "first-978914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-978914
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-978914: (2.201719687s)
--- PASS: TestMinikubeProfile (49.21s)

TestMountStart/serial/StartWithMountFirst (9.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-070941 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-070941 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.700079717s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.70s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-070941 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (6.88s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-085030 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-085030 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.874097996s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.88s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-085030 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-070941 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-070941 --alsologtostderr -v=5: (1.620371392s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-085030 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-085030
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-085030: (1.172739254s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (8.63s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-085030
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-085030: (7.632538163s)
--- PASS: TestMountStart/serial/RestartStopped (8.63s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-085030 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (94.74s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026168 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026168 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.281951603s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.74s)

TestMultiNode/serial/DeployApp2Nodes (5.17s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-026168 -- rollout status deployment/busybox: (3.837323165s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-qt9rx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-z8csk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-qt9rx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-z8csk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-qt9rx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-z8csk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.17s)

TestMultiNode/serial/PingHostFrom2Pods (0.7s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-qt9rx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-qt9rx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-z8csk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026168 -- exec busybox-7dff88458-z8csk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)

TestMultiNode/serial/AddNode (25.89s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026168 -v 3 --alsologtostderr
E0916 10:55:02.444631   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-026168 -v 3 --alsologtostderr: (25.277576794s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.89s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (9.05s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp testdata/cp-test.txt multinode-026168:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2288589271/001/cp-test_multinode-026168.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168:/home/docker/cp-test.txt multinode-026168-m02:/home/docker/cp-test_multinode-026168_multinode-026168-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m02 "sudo cat /home/docker/cp-test_multinode-026168_multinode-026168-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168:/home/docker/cp-test.txt multinode-026168-m03:/home/docker/cp-test_multinode-026168_multinode-026168-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m03 "sudo cat /home/docker/cp-test_multinode-026168_multinode-026168-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp testdata/cp-test.txt multinode-026168-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2288589271/001/cp-test_multinode-026168-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt multinode-026168:/home/docker/cp-test_multinode-026168-m02_multinode-026168.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168 "sudo cat /home/docker/cp-test_multinode-026168-m02_multinode-026168.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168-m02:/home/docker/cp-test.txt multinode-026168-m03:/home/docker/cp-test_multinode-026168-m02_multinode-026168-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m03 "sudo cat /home/docker/cp-test_multinode-026168-m02_multinode-026168-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp testdata/cp-test.txt multinode-026168-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2288589271/001/cp-test_multinode-026168-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt multinode-026168:/home/docker/cp-test_multinode-026168-m03_multinode-026168.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168 "sudo cat /home/docker/cp-test_multinode-026168-m03_multinode-026168.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 cp multinode-026168-m03:/home/docker/cp-test.txt multinode-026168-m02:/home/docker/cp-test_multinode-026168-m03_multinode-026168-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 ssh -n multinode-026168-m02 "sudo cat /home/docker/cp-test_multinode-026168-m03_multinode-026168-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.05s)

TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-026168 node stop m03: (1.17590333s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026168 status: exit status 7 (475.764444ms)
-- stdout --
	multinode-026168
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026168-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026168-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026168 status --alsologtostderr: exit status 7 (482.712205ms)
-- stdout --
	multinode-026168
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026168-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026168-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 10:55:35.987087  164443 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:35.987191  164443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:35.987198  164443 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:35.987203  164443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:35.987411  164443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:55:35.987578  164443 out.go:352] Setting JSON to false
	I0916 10:55:35.987607  164443 mustload.go:65] Loading cluster: multinode-026168
	I0916 10:55:35.987719  164443 notify.go:220] Checking for updates...
	I0916 10:55:35.987997  164443 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:55:35.988010  164443 status.go:255] checking status of multinode-026168 ...
	I0916 10:55:35.988442  164443 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:55:36.011655  164443 status.go:330] multinode-026168 host status = "Running" (err=<nil>)
	I0916 10:55:36.011680  164443 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:55:36.011965  164443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168
	I0916 10:55:36.031601  164443 host.go:66] Checking if "multinode-026168" exists ...
	I0916 10:55:36.031877  164443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:55:36.031944  164443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168
	I0916 10:55:36.051137  164443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168/id_rsa Username:docker}
	I0916 10:55:36.142424  164443 ssh_runner.go:195] Run: systemctl --version
	I0916 10:55:36.146684  164443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:55:36.156833  164443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:55:36.215917  164443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-16 10:55:36.205767604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:55:36.216457  164443 kubeconfig.go:125] found "multinode-026168" server: "https://192.168.67.2:8443"
	I0916 10:55:36.216485  164443 api_server.go:166] Checking apiserver status ...
	I0916 10:55:36.216520  164443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:55:36.227220  164443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1502/cgroup
	I0916 10:55:36.235614  164443 api_server.go:182] apiserver freezer: "8:freezer:/docker/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/crio/crio-fd0447db4a560a60ebcfda53d853a3e402c5897ca07bff9ef1397e4a880e4a17"
	I0916 10:55:36.235685  164443 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/23ba806c052455767ddd23d92fc2c2c28dbd39ee04340ce9ecd62fd45e9eff74/crio/crio-fd0447db4a560a60ebcfda53d853a3e402c5897ca07bff9ef1397e4a880e4a17/freezer.state
	I0916 10:55:36.243550  164443 api_server.go:204] freezer state: "THAWED"
	I0916 10:55:36.243585  164443 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:55:36.247391  164443 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 10:55:36.247414  164443 status.go:422] multinode-026168 apiserver status = Running (err=<nil>)
	I0916 10:55:36.247423  164443 status.go:257] multinode-026168 status: &{Name:multinode-026168 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:55:36.247439  164443 status.go:255] checking status of multinode-026168-m02 ...
	I0916 10:55:36.247660  164443 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:55:36.265589  164443 status.go:330] multinode-026168-m02 host status = "Running" (err=<nil>)
	I0916 10:55:36.265615  164443 host.go:66] Checking if "multinode-026168-m02" exists ...
	I0916 10:55:36.265961  164443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-026168-m02
	I0916 10:55:36.283350  164443 host.go:66] Checking if "multinode-026168-m02" exists ...
	I0916 10:55:36.283622  164443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:55:36.283654  164443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-026168-m02
	I0916 10:55:36.302718  164443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3799/.minikube/machines/multinode-026168-m02/id_rsa Username:docker}
	I0916 10:55:36.394383  164443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:55:36.405953  164443 status.go:257] multinode-026168-m02 status: &{Name:multinode-026168-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:55:36.406001  164443 status.go:255] checking status of multinode-026168-m03 ...
	I0916 10:55:36.406234  164443 cli_runner.go:164] Run: docker container inspect multinode-026168-m03 --format={{.State.Status}}
	I0916 10:55:36.424933  164443 status.go:330] multinode-026168-m03 host status = "Stopped" (err=<nil>)
	I0916 10:55:36.424961  164443 status.go:343] host is not running, skipping remaining checks
	I0916 10:55:36.424968  164443 status.go:257] multinode-026168-m03 status: &{Name:multinode-026168-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)

TestMultiNode/serial/RestartKeepsNodes (98.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026168
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-026168
E0916 10:56:06.691649   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-026168: (24.679502789s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026168 --wait=true -v=8 --alsologtostderr
E0916 10:56:25.509551   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026168 --wait=true -v=8 --alsologtostderr: (1m13.906681139s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026168
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.68s)

TestMultiNode/serial/StopMultiNode (23.7s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-026168 stop: (23.532045241s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026168 status: exit status 7 (78.447721ms)
-- stdout --
	multinode-026168
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026168-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026168 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026168 status --alsologtostderr: exit status 7 (84.198811ms)
-- stdout --
	multinode-026168
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026168-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 10:57:57.851447  175188 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:57:57.851701  175188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:57:57.851712  175188 out.go:358] Setting ErrFile to fd 2...
	I0916 10:57:57.851718  175188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:57:57.851907  175188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 10:57:57.852085  175188 out.go:352] Setting JSON to false
	I0916 10:57:57.852119  175188 mustload.go:65] Loading cluster: multinode-026168
	I0916 10:57:57.852171  175188 notify.go:220] Checking for updates...
	I0916 10:57:57.852592  175188 config.go:182] Loaded profile config "multinode-026168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:57:57.852609  175188 status.go:255] checking status of multinode-026168 ...
	I0916 10:57:57.853118  175188 cli_runner.go:164] Run: docker container inspect multinode-026168 --format={{.State.Status}}
	I0916 10:57:57.873134  175188 status.go:330] multinode-026168 host status = "Stopped" (err=<nil>)
	I0916 10:57:57.873158  175188 status.go:343] host is not running, skipping remaining checks
	I0916 10:57:57.873164  175188 status.go:257] multinode-026168 status: &{Name:multinode-026168 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:57:57.873194  175188 status.go:255] checking status of multinode-026168-m02 ...
	I0916 10:57:57.873527  175188 cli_runner.go:164] Run: docker container inspect multinode-026168-m02 --format={{.State.Status}}
	I0916 10:57:57.891633  175188 status.go:330] multinode-026168-m02 host status = "Stopped" (err=<nil>)
	I0916 10:57:57.891654  175188 status.go:343] host is not running, skipping remaining checks
	I0916 10:57:57.891666  175188 status.go:257] multinode-026168-m02 status: &{Name:multinode-026168-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.70s)

TestMultiNode/serial/ValidateNameConflict (23.45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026168
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026168-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-026168-m02 --driver=docker  --container-runtime=crio: exit status 14 (67.2908ms)
-- stdout --
	* [multinode-026168-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-026168-m02' is duplicated with machine name 'multinode-026168-m02' in profile 'multinode-026168'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026168-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026168-m03 --driver=docker  --container-runtime=crio: (21.235323485s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026168
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-026168: exit status 80 (266.542592ms)
-- stdout --
	* Adding node m03 to cluster multinode-026168 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-026168-m03 already exists in multinode-026168-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-026168-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-026168-m03: (1.833193372s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.45s)

TestPreload (120.18s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-004417 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0916 11:00:02.444116   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-004417 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m20.930993196s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-004417 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-004417 image pull gcr.io/k8s-minikube/busybox: (3.496009791s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-004417
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-004417: (5.716410449s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-004417 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0916 11:01:06.689385   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-004417 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (27.523761103s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-004417 image list
helpers_test.go:175: Cleaning up "test-preload-004417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-004417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-004417: (2.283691043s)
--- PASS: TestPreload (120.18s)

TestScheduledStopUnix (100.2s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-135550 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-135550 --memory=2048 --driver=docker  --container-runtime=crio: (24.22791517s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-135550 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-135550 -n scheduled-stop-135550
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-135550 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-135550 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-135550 -n scheduled-stop-135550
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-135550
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-135550 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-135550
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-135550: exit status 7 (65.527396ms)
-- stdout --
	scheduled-stop-135550
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-135550 -n scheduled-stop-135550
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-135550 -n scheduled-stop-135550: exit status 7 (61.834398ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-135550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-135550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-135550: (4.641311875s)
--- PASS: TestScheduledStopUnix (100.20s)

TestInsufficientStorage (9.91s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-913017 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-913017 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.580659763s)
-- stdout --
	{"specversion":"1.0","id":"ea12e2c9-44e5-441b-b4c3-cf555350b012","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-913017] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fe7f848-baa7-4069-be86-0efa02632dcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"6ea6a980-7f8b-4e13-b33e-bd8edb46e1c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"08262c5f-7f2d-4230-a5c6-08f1fb90ce8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig"}}
	{"specversion":"1.0","id":"af8b40f6-5c07-4695-890c-63036f479afe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube"}}
	{"specversion":"1.0","id":"96dc762b-69d3-4724-9723-d6a194c87f02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9f90526d-1906-4a9a-b5a9-025944c0455d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0b84c0ee-42a8-4959-88c3-2f96c625af83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e71cbe18-369d-4a05-bf19-b115dc8708c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ed2ff281-c301-471f-8978-3a6c0b646725","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5ed0a43-94cd-4da5-8b66-3e4fe4bb3fc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3c00b9bc-0303-4f66-b183-1972a2d6145c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-913017\" primary control-plane node in \"insufficient-storage-913017\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed8729a9-0e1b-42d7-9dc4-bf906c95fc72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"113aa394-8cdb-43ea-9829-965ea5a66812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f27046ff-512a-49f4-be1c-3ce12112306a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-913017 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-913017 --output=json --layout=cluster: exit status 7 (263.541448ms)
-- stdout --
	{"Name":"insufficient-storage-913017","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-913017","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0916 11:03:10.535041  198794 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-913017" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-913017 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-913017 --output=json --layout=cluster: exit status 7 (260.505461ms)
-- stdout --
	{"Name":"insufficient-storage-913017","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-913017","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0916 11:03:10.795687  198894 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-913017" does not appear in /home/jenkins/minikube-integration/19651-3799/kubeconfig
	E0916 11:03:10.805723  198894 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/insufficient-storage-913017/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-913017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-913017
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-913017: (1.803082019s)
--- PASS: TestInsufficientStorage (9.91s)

TestRunningBinaryUpgrade (75.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2493979544 start -p running-upgrade-802794 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2493979544 start -p running-upgrade-802794 --memory=2200 --vm-driver=docker  --container-runtime=crio: (43.038501884s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-802794 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0916 11:06:06.689408   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-802794 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.602934369s)
helpers_test.go:175: Cleaning up "running-upgrade-802794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-802794
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-802794: (7.880446637s)
--- PASS: TestRunningBinaryUpgrade (75.79s)

TestMissingContainerUpgrade (199.88s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4233675698 start -p missing-upgrade-922846 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4233675698 start -p missing-upgrade-922846 --memory=2200 --driver=docker  --container-runtime=crio: (2m8.149979795s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-922846
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-922846: (10.488799314s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-922846
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-922846 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-922846 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.736214727s)
helpers_test.go:175: Cleaning up "missing-upgrade-922846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-922846
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-922846: (2.030931026s)
--- PASS: TestMissingContainerUpgrade (199.88s)

TestStoppedBinaryUpgrade/Setup (2.82s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.82s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884709 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-884709 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (83.425077ms)
-- stdout --
	* [NoKubernetes-884709] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
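
The exit-status-14 check above leans on a documented flag conflict: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal reproduction outside the harness (profile name hypothetical) looks like:

    $ minikube start -p demo --no-kubernetes --kubernetes-version=1.20
    X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
    to unset a global config run:
    $ minikube config unset kubernetes-version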

TestNoKubernetes/serial/StartWithK8s (29.28s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884709 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-884709 --driver=docker  --container-runtime=crio: (28.941265628s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-884709 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.28s)

TestStoppedBinaryUpgrade/Upgrade (157.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.301795833 start -p stopped-upgrade-911411 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.301795833 start -p stopped-upgrade-911411 --memory=2200 --vm-driver=docker  --container-runtime=crio: (2m11.918547189s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.301795833 -p stopped-upgrade-911411 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.301795833 -p stopped-upgrade-911411 stop: (2.209627117s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-911411 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-911411 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.013358601s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (157.14s)
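
The binary-upgrade tests in this stretch (TestRunningBinaryUpgrade, TestMissingContainerUpgrade, and this one) all follow the same shape: bring a cluster up with an older release, disturb it (stop it, or remove its container), then assert the freshly built binary can take it over. Sketched as shell, with the old-release path hypothetical since the harness downloads it to a temp file:

    old=/tmp/minikube-v1.26.0.301795833   # older release under test; path varies per run
    $old start -p stopped-upgrade-911411 --memory=2200 --vm-driver=docker --container-runtime=crio
    $old -p stopped-upgrade-911411 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-911411 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio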

TestNoKubernetes/serial/StartWithStopK8s (20.03s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884709 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-884709 --no-kubernetes --driver=docker  --container-runtime=crio: (17.899991371s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-884709 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-884709 status -o json: exit status 2 (274.178908ms)
-- stdout --
	{"Name":"NoKubernetes-884709","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-884709
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-884709: (1.852236846s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.03s)

TestNoKubernetes/serial/Start (6.07s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884709 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-884709 --no-kubernetes --driver=docker  --container-runtime=crio: (6.070377313s)
--- PASS: TestNoKubernetes/serial/Start (6.07s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-884709 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-884709 "sudo systemctl is-active --quiet service kubelet": exit status 1 (250.618958ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
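
The non-zero exit is the assertion here: systemctl is-active exits 0 only when the unit is active and 3 when it is inactive, which surfaces through the SSH session as "ssh: Process exited with status 3" and through minikube ssh as exit status 1. By hand:

    $ minikube ssh -p NoKubernetes-884709 "sudo systemctl is-active --quiet service kubelet"
    $ echo $?   # 1: minikube ssh wraps the remote is-active failure (status 3)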

TestNoKubernetes/serial/ProfileList (0.9s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.90s)

TestNoKubernetes/serial/Stop (1.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-884709
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-884709: (1.181357536s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

TestNoKubernetes/serial/StartNoArgs (10.65s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-884709 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-884709 --driver=docker  --container-runtime=crio: (10.645159061s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.65s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-884709 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-884709 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.26238ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestNetworkPlugins/group/false (2.18s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-838467 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-838467 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (187.683937ms)
-- stdout --
	* [false-838467] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I0916 11:05:06.027848  220963 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:05:06.027997  220963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:05:06.028007  220963 out.go:358] Setting ErrFile to fd 2...
	I0916 11:05:06.028014  220963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:05:06.028299  220963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3799/.minikube/bin
	I0916 11:05:06.029026  220963 out.go:352] Setting JSON to false
	I0916 11:05:06.030569  220963 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2846,"bootTime":1726481860,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:05:06.030721  220963 start.go:139] virtualization: kvm guest
	I0916 11:05:06.032831  220963 out.go:177] * [false-838467] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:05:06.034255  220963 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:05:06.034282  220963 notify.go:220] Checking for updates...
	I0916 11:05:06.037635  220963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:05:06.039255  220963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3799/kubeconfig
	I0916 11:05:06.040631  220963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3799/.minikube
	I0916 11:05:06.042151  220963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:05:06.043700  220963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:05:06.045858  220963 config.go:182] Loaded profile config "kubernetes-upgrade-749637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:05:06.046004  220963 config.go:182] Loaded profile config "missing-upgrade-922846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0916 11:05:06.046118  220963 config.go:182] Loaded profile config "stopped-upgrade-911411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0916 11:05:06.046247  220963 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:05:06.083303  220963 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:05:06.083410  220963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:05:06.147648  220963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2024-09-16 11:05:06.136965012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:05:06.147757  220963 docker.go:318] overlay module found
	I0916 11:05:06.150066  220963 out.go:177] * Using the docker driver based on user configuration
	I0916 11:05:06.151576  220963 start.go:297] selected driver: docker
	I0916 11:05:06.151594  220963 start.go:901] validating driver "docker" against <nil>
	I0916 11:05:06.151607  220963 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:05:06.154077  220963 out.go:201] 
	W0916 11:05:06.155693  220963 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0916 11:05:06.157094  220963 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-838467 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:

>>> netcat: nslookup debug kubernetes.default a-records:

>>> netcat: dig search kubernetes.default:

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:

>>> netcat: nc 10.96.0.10 udp/53:

>>> netcat: nc 10.96.0.10 tcp/53:

>>> netcat: /etc/nsswitch.conf:

>>> netcat: /etc/hosts:

>>> netcat: /etc/resolv.conf:

>>> host: /etc/nsswitch.conf:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /etc/hosts:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /etc/resolv.conf:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :

>>> host: crictl pods:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: crictl containers:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> k8s: describe netcat deployment:

>>> k8s: describe netcat pod(s):

>>> k8s: netcat logs:

>>> k8s: describe coredns deployment:

>>> k8s: describe coredns pods:

>>> k8s: coredns logs:

>>> k8s: describe api server pod(s):

>>> k8s: api server logs:

>>> host: /etc/cni:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: ip a s:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: ip r s:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: iptables-save:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: iptables table nat:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> k8s: describe kube-proxy daemon set:

>>> k8s: describe kube-proxy pod(s):

>>> k8s: kube-proxy logs:

>>> host: kubelet daemon status:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: kubelet daemon config:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> k8s: kubelet logs:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> k8s: kubectl config:

>>> k8s: cms:

>>> host: docker daemon status:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: docker daemon config:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /etc/docker/daemon.json:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: docker system info:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: cri-docker daemon status:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: cri-docker daemon config:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: cri-dockerd version:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: containerd daemon status:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: containerd daemon config:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /etc/containerd/config.toml:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: containerd config dump:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: crio daemon status:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: crio daemon config:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: /etc/crio:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

>>> host: crio config:
* Profile "false-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838467"

----------------------- debugLogs end: false-838467 [took: 1.813347233s] --------------------------------
helpers_test.go:175: Cleaning up "false-838467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-838467
--- PASS: TestNetworkPlugins/group/false (2.18s)
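
The MK_USAGE failure is the expected outcome: CRI-O has no usable pod network without a CNI plugin, so minikube rejects --cni=false outright for this runtime. Any concrete CNI choice, as the other groups in this run use, passes validation; for instance (profile name hypothetical):

    $ minikube start -p demo --cni=false   --container-runtime=crio --driver=docker   # rejected: "crio" requires CNI
    $ minikube start -p demo --cni=kindnet --container-runtime=crio --driver=docker   # accepted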

TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-911411
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

TestPause/serial/Start (40.75s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-259137 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-259137 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (40.752058178s)
--- PASS: TestPause/serial/Start (40.75s)

TestPause/serial/SecondStartNoReconfiguration (24.64s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-259137 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-259137 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.633668165s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.64s)

TestNetworkPlugins/group/auto/Start (37.61s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (37.614012598s)
--- PASS: TestNetworkPlugins/group/auto/Start (37.61s)

TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-259137 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-259137 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-259137 --output=json --layout=cluster: exit status 2 (315.059998ms)
-- stdout --
	{"Name":"pause-259137","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-259137","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
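
The status codes in this JSON layout are HTTP-flavored: 200 for OK, 405 for Stopped, 418 for Paused, and the command itself exits 2 while the cluster is paused. A quick manual probe (assuming jq is available) might be:

    $ minikube status -p pause-259137 --output=json --layout=cluster | jq -r .StatusName
    Paused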

TestPause/serial/Unpause (0.65s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-259137 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

TestPause/serial/PauseAgain (0.73s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-259137 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

TestPause/serial/DeletePaused (2.69s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-259137 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-259137 --alsologtostderr -v=5: (2.690195114s)
--- PASS: TestPause/serial/DeletePaused (2.69s)

TestPause/serial/VerifyDeletedResources (14.27s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.209990173s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-259137
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-259137: exit status 1 (18.644428ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-259137: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.27s)
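
Deletion is verified negatively: once the profile is gone, docker volume inspect must fail. Docker prints an empty JSON array on stdout, the "no such volume" error on stderr, and exits 1, which is exactly what the test treats as success:

    $ docker volume inspect pause-259137; echo "exit=$?"
    []
    Error response from daemon: get pause-259137: no such volume
    exit=1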

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-838467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
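
pgrep -a prints each matching PID together with its full command line, so this one ssh invocation is enough for the harness to read back every flag kubelet was started with:

    $ minikube ssh -p auto-838467 "pgrep -a kubelet"
    # => "<pid> <full kubelet command line>", which the test then greps for the expected flags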

TestNetworkPlugins/group/kindnet/Start (67.38s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m7.378347857s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.38s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9xqhf" [e8495de4-de26-4fb0-a7f8-f4d04eca3298] Running
E0916 11:09:09.759604   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003750834s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
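
The wait above polls for pods carrying the plugin's controller label until they report Running. A roughly equivalent readiness check, expressed directly with kubectl against the context this run created:

    $ kubectl --context kindnet-838467 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m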

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-838467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/Start (53.57s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.569784569s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.57s)

TestNetworkPlugins/group/enable-default-cni/Start (70.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0916 11:10:02.444399   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.436650262s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.44s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2kvw6" [0e5e7a81-b6bf-4bef-b9f5-529425009b6c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004091966s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-838467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-838467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/Start (52.42s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (52.419573637s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.42s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pm2mg" [d37ec762-70dd-4744-bba4-a221e03c35be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003736812s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-838467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/Start (62.14s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0916 11:40:02.444666   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.143986696s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-838467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/Start (50.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-838467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.298283215s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.30s)

TestStartStop/group/old-k8s-version/serial/FirstStart (139.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-406673 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-406673 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m19.991097657s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.99s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-838467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestStartStop/group/old-k8s-version/serial/Stop (5.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-406673 --alsologtostderr -v=3
E0916 11:43:38.402843   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-406673 --alsologtostderr -v=3: (5.757612401s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.76s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-406673 -n old-k8s-version-406673
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-406673 -n old-k8s-version-406673: exit status 7 (64.404641ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-406673 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
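
minikube status encodes cluster state in its exit code, so exit status 7 with "Stopped" on stdout is not a failure here; the harness notes "may be ok" and proceeds to enable the addon against the stopped profile:

    $ minikube status --format={{.Host}} -p old-k8s-version-406673; echo "exit=$?"
    Stopped
    exit=7
    $ minikube addons enable dashboard -p old-k8s-version-406673 --images=MetricsScraper=registry.k8s.io/echoserver:1.4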

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-h95rv" [a69b94e2-51ee-4cb5-8692-7882d7361328] Running
E0916 11:50:02.443836   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004239959s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-406673 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
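The image check inventories the profile's container runtime and flags anything outside the expected Kubernetes image set (here the kindest/kindnetd CNI images). The same inventory can be taken by hand; a sketch, assuming the expected core images all live on registry.k8s.io:

  # list every image known to the profile's runtime (--format=json also works, as used above)
  minikube -p <profile> image list
  # crude filter for images not hosted on registry.k8s.io
  minikube -p <profile> image list | grep -v '^registry.k8s.io/'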

TestStartStop/group/old-k8s-version/serial/Pause (2.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-406673 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-406673 -n old-k8s-version-406673
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-406673 -n old-k8s-version-406673: exit status 2 (295.775644ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-406673 -n old-k8s-version-406673
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-406673 -n old-k8s-version-406673: exit status 2 (302.413765ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-406673 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-406673 -n old-k8s-version-406673
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-406673 -n old-k8s-version-406673
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.63s)
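The pause step is a full round trip: after minikube pause the API server reads Paused and the kubelet Stopped, both surfacing as exit status 2 from status (tolerated by the harness), and unpause restores them. As a sketch:

  minikube pause -p <profile>
  minikube status --format='{{.APIServer}}' -p <profile>   # prints Paused, exits 2
  minikube status --format='{{.Kubelet}}' -p <profile>     # prints Stopped, exits 2
  minikube unpause -p <profile>
  minikube status --format='{{.APIServer}}' -p <profile>   # exits 0 once running again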

TestStartStop/group/no-preload/serial/FirstStart (55.7s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-179932 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0916 11:50:34.294600   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:50:54.647889   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:01.997526   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:06.689512   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-179932 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (55.703992072s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.70s)
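--preload=false makes this first start slower than a preloaded one, since the Kubernetes images are pulled individually rather than restored from minikube's preloaded tarball (about 56s here versus about 39s for the preloaded default-k8s-diff-port first start further down). The flag combination from the log is reproducible as-is, profile name as a placeholder:

  out/minikube-linux-amd64 start -p <profile> --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1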

TestStartStop/group/no-preload/serial/Stop (5.81s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-179932 --alsologtostderr -v=3
E0916 11:51:22.349902   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-179932 --alsologtostderr -v=3: (5.813353038s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (5.81s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-179932 -n no-preload-179932
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-179932 -n no-preload-179932: exit status 7 (66.749234ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-179932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (261.82s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-179932 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0916 11:52:57.427876   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:04.833143   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-179932 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m21.514995965s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-179932 -n no-preload-179932
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (261.82s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qznkx" [2e06c663-e6f4-4dc5-96d5-e2c7c06a77c6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003442181s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-179932 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.66s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-179932 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-179932 -n no-preload-179932
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-179932 -n no-preload-179932: exit status 2 (303.561387ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-179932 -n no-preload-179932
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-179932 -n no-preload-179932: exit status 2 (300.308236ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-179932 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-179932 -n no-preload-179932
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-179932 -n no-preload-179932
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.66s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.48s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-451928 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0916 11:56:06.689888   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:56:20.479181   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-451928 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (39.476471407s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.48s)
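--apiserver-port=8444 moves the API server off minikube's default 8443, so the kubeconfig entry written for the profile should carry the non-default port. One way to confirm, as a sketch:

  # print the server URL recorded for the profile's cluster entry
  kubectl config view -o jsonpath='{.clusters[?(@.name=="<profile>")].cluster.server}'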

TestStartStop/group/default-k8s-diff-port/serial/Stop (5.8s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-451928 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-451928 --alsologtostderr -v=3: (5.800912238s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (5.80s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928: exit status 7 (66.816813ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-451928 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-451928 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0916 11:57:42.400903   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:57:57.428089   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:59:04.833434   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:59:09.765717   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:59:20.491094   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:59:58.540869   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:02.444484   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:26.242638   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:27.896157   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:34.294641   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:46.836121   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:46.842596   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:46.854051   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:46.875493   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:46.917117   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:46.998634   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:47.160199   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:47.482145   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:48.124239   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:49.406199   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:51.967794   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:54.648719   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:00:57.089520   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:01:06.689677   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:01:07.331023   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-451928 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.287301813s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.60s)
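The cert_rotation.go:171 lines interleaved through this run appear to come from the test process's client-go certificate watchers, still pointed at client.crt files of profiles that have since been deleted; they are noise and do not fail the test. To see which missing certificates dominate a saved log (the log file name here is a placeholder):

  grep -o 'profiles/[^/]*/client.crt' test.log | sort | uniq -c | sort -rn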

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zqv8v" [264208fe-d84b-493b-aec2-9ef0c7ae7794] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003564542s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-451928 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-451928 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928: exit status 2 (295.922637ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928: exit status 2 (299.265801ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-451928 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-451928 -n default-k8s-diff-port-451928
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

TestStartStop/group/newest-cni/serial/FirstStart (24.51s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-483277 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0916 12:01:57.359063   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-483277 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (24.51012574s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.51s)
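This start exercises the bare CNI path: --network-plugin=cni plus a kubeadm pod CIDR, with no CNI implementation installed, which is why later steps in this group warn that pods cannot schedule. Any kubeadm option can be threaded through the same component.key=value shape of --extra-config, e.g.:

  out/minikube-linux-amd64 start -p <profile> --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=crio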

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.89s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-483277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.89s)
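The "cni mode requires additional setup" warning above is why DeployApp and the other app-facing steps in this group are no-ops: pods stay Pending until a CNI plugin is applied. A hedged sketch of that missing step, with the manifest path purely hypothetical (use whichever CNI matches the 10.42.0.0/16 pod CIDR):

  # install a CNI implementation so pods can schedule; manifest path is a placeholder
  kubectl --context <profile> apply -f <your-cni-manifest>.yaml
  kubectl --context <profile> -n kube-system get pods -w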

TestStartStop/group/newest-cni/serial/Stop (1.2s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-483277 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-483277 --alsologtostderr -v=3: (1.202071822s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-483277 -n newest-cni-483277
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-483277 -n newest-cni-483277: exit status 7 (62.82773ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-483277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/newest-cni/serial/SecondStart (12.43s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-483277 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0916 12:02:08.774630   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-483277 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (12.096577121s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-483277 -n newest-cni-483277
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-483277 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.77s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-483277 --alsologtostderr -v=1
E0916 12:02:17.712145   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-483277 -n newest-cni-483277
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-483277 -n newest-cni-483277: exit status 2 (304.565515ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-483277 -n newest-cni-483277
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-483277 -n newest-cni-483277: exit status 2 (299.257688ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-483277 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-483277 -n newest-cni-483277
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-483277 -n newest-cni-483277
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.77s)

TestStartStop/group/embed-certs/serial/FirstStart (68.88s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-132595 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0916 12:02:57.427585   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:03:05.517640   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:03:30.696906   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-132595 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m8.875115502s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (68.88s)
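--embed-certs inlines the CA and client certificates into the kubeconfig (certificate-authority-data / client-certificate-data fields) instead of pointing at files under .minikube/profiles/. A quick way to confirm, as a sketch:

  # with --embed-certs the kubeconfig carries base64 cert data rather than file paths
  kubectl config view --raw -o jsonpath='{.users[?(@.name=="<profile>")].user.client-certificate-data}' | head -c 40; echo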

TestStartStop/group/embed-certs/serial/Stop (5.77s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-132595 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-132595 --alsologtostderr -v=3: (5.766846677s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (5.77s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132595 -n embed-certs-132595
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132595 -n embed-certs-132595: exit status 7 (98.146255ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-132595 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (261.77s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-132595 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0916 12:04:04.833112   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:04:58.541117   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/old-k8s-version-406673/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:05:02.444283   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/functional-546931/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:05:34.293772   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/calico-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:05:46.836794   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:05:54.648302   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/enable-default-cni-838467/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:06.690339   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/addons-821781/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:14.538261   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/no-preload-179932/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:19.580850   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:19.587214   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:19.598588   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:19.619981   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:19.661432   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:19.742797   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:19.904362   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:20.226087   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:20.867457   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:22.148996   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:24.711224   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:29.832780   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:40.075100   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:00.557168   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:41.518937   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:57.427855   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/auto-838467/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-132595 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m21.457868079s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132595 -n embed-certs-132595
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (261.77s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x2xqb" [9915d875-dc88-4715-ae81-f996fbf96461] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003329916s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-132595 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.65s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-132595 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132595 -n embed-certs-132595
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132595 -n embed-certs-132595: exit status 2 (302.117579ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-132595 -n embed-certs-132595
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-132595 -n embed-certs-132595: exit status 2 (297.228052ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-132595 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132595 -n embed-certs-132595
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-132595 -n embed-certs-132595
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.65s)
E0916 12:09:03.440539   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/default-k8s-diff-port-451928/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:09:04.833196   11208 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3799/.minikube/profiles/kindnet-838467/client.crt: no such file or directory" logger="UnhandledError"

Test skip (25/306)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/serial/Volcano (0s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
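
What these three skipped tests would exercise is name resolution forwarded through the tunnel, i.e. querying the cluster DNS service directly, the way dig @10.96.0.10 does. A hedged sketch of such a lookup in Go (the server address and function name are illustrative, not minikube's actual code):

package example

import (
    "context"
    "net"
    "time"
)

// lookupVia resolves host against a specific DNS server instead of the
// system resolver, e.g. server = "10.96.0.10:53".
func lookupVia(server, host string) ([]string, error) {
    r := &net.Resolver{
        PreferGo: true,
        // Route every resolver query to the chosen server.
        Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
            d := net.Dialer{Timeout: 5 * time.Second}
            return d.DialContext(ctx, network, server)
        },
    }
    return r.LookupHost(context.Background(), host)
}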

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (2.4s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-838467 [pass: true] --------------------------------
>>> netcat checks, all with no output: nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53; dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53; nc 10.96.0.10 udp/53; nc 10.96.0.10 tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf

>>> k8s checks, all with no output: nodes, services, endpoints, daemon sets, deployments and pods; describe netcat deployment; describe netcat pod(s); netcat logs; describe coredns deployment; describe coredns pods; coredns logs; describe api server pod(s); api server logs; describe kube-proxy daemon set; describe kube-proxy pod(s); kube-proxy logs; kubectl config; cms

>>> host checks (/etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status; kubelet daemon config; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status; docker daemon config; /etc/docker/daemon.json; docker system info; cri-docker daemon status; cri-docker daemon config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status; containerd daemon config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status; crio daemon config; /etc/crio; crio config) and k8s: kubelet logs, each with the same output:
* Profile "kubenet-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838467"
----------------------- debugLogs end: kubenet-838467 [took: 2.212166291s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-838467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-838467
--- SKIP: TestNetworkPlugins/group/kubenet (2.40s)
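
The debugLogs block above is produced by running a fixed list of diagnostic commands and logging whatever they print, success or failure; since no kubenet-838467 profile was ever started, every host-side command fails with the same hint. A rough sketch of that collect-and-log loop (the helper name and command list are assumptions for illustration, not minikube's actual code):

package example

import (
    "os/exec"
    "testing"
)

func dumpDebugLogs(t *testing.T, profile string) {
    // Each entry: a label followed by the command that produces it.
    checks := [][]string{
        {"host: crictl pods", "out/minikube-linux-amd64", "-p", profile, "ssh", "sudo crictl pods"},
        {"host: ip a s", "out/minikube-linux-amd64", "-p", profile, "ssh", "ip a s"},
    }
    for _, c := range checks {
        // Errors are not fatal: output (including the "Profile not
        // found" hint) is logged verbatim and collection continues.
        out, _ := exec.Command(c[1], c[2:]...).CombinedOutput()
        t.Logf(">>> %s:\n%s", c[0], out)
    }
}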

TestNetworkPlugins/group/cilium (2.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-838467 [pass: true] --------------------------------
>>> netcat checks, all with no output: nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53; dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53; nc 10.96.0.10 udp/53; nc 10.96.0.10 tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf

>>> k8s checks, all with no output: nodes, services, endpoints, daemon sets, deployments and pods; describe netcat deployment; describe netcat pod(s); netcat logs; describe coredns deployment; describe coredns pods; coredns logs; describe api server pod(s); api server logs; describe cilium daemon set; describe cilium daemon set pod(s); cilium daemon set container(s) logs (current); cilium daemon set container(s) logs (previous); describe cilium deployment; describe cilium deployment pod(s); cilium deployment container(s) logs (current); cilium deployment container(s) logs (previous); describe kube-proxy daemon set; describe kube-proxy pod(s); kube-proxy logs; kubectl config; cms

>>> host checks (/etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status; kubelet daemon config; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status; docker daemon config; /etc/docker/daemon.json; docker system info; cri-docker daemon status; cri-docker daemon config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status; containerd daemon config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status; crio daemon config; /etc/crio; crio config) and k8s: kubelet logs, each with the same output:
* Profile "cilium-838467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838467"
----------------------- debugLogs end: cilium-838467 [took: 1.826406018s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-838467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-838467
--- SKIP: TestNetworkPlugins/group/cilium (2.02s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-946599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-946599
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
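
The "Cleaning up ... profile" lines come from a cleanup helper registered with defer, so the throwaway profile is deleted even when the test skips. A minimal sketch (the helper name is assumed):

package example

import (
    "os/exec"
    "testing"
)

func cleanupProfile(t *testing.T, profile string) {
    t.Helper()
    // Delete the profile; log rather than fail if deletion goes wrong,
    // since cleanup runs after the test verdict is already decided.
    if out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput(); err != nil {
        t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
    }
}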